Short Notes on AWS
Can't connect to EC2 instance
Two obvious problems with incoming requests that are outside of AWS's scope:
- check the instance's firewall
- check that the app is listening on all interfaces (bound to 0.0.0.0 or your machine's IP, not just 127.0.0.1)
On the AWS side, check the following (a boto3 sketch of the same checks follows this list):
- make sure the Elastic IP is associated with the instance
  - find the instance in EC2 > Instances
  - look under the Description tab, Elastic IP
  - if it's not, go to EC2 > Elastic IPs
  - choose an Elastic IP from the list (or allocate a new one) that is not associated with any instance
  - choose Actions > Associate address, and associate it with the instance
- make sure Security group permissions allow the connection
  - go to EC2 > Security Groups
  - select the security group (you can find which security group the instance is in on the EC2 > Instances page, last column)
  - on the Inbound tab, check that your protocol is enabled for Source 0.0.0.0/0 (or for your IP)
- make sure your Internet Gateway is connected to your VPC
  - check that the Internet Gateway is attached to the VPC, under VPC > Internet Gateways > Summary tab
  - go to VPC > Route Tables, and select the route table for your VPC
  - under the Routes tab, make sure that a route with Destination 0.0.0.0/0 and Target set to your internet gateway exists and is Active
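The same checks can be scripted with boto3. A minimal sketch, assuming configured credentials and a hypothetical instance ID:

import boto3

ec2 = boto3.client('ec2')
INSTANCE_ID = 'i-0123456789abcdef0'  # hypothetical instance ID

# 1. Is an Elastic IP associated with the instance?
addresses = ec2.describe_addresses(
    Filters=[{'Name': 'instance-id', 'Values': [INSTANCE_ID]}])
print('Elastic IPs:', [a['PublicIp'] for a in addresses['Addresses']])

# 2. What do the instance's security groups allow inbound?
instance = ec2.describe_instances(
    InstanceIds=[INSTANCE_ID])['Reservations'][0]['Instances'][0]
group_ids = [g['GroupId'] for g in instance['SecurityGroups']]
for sg in ec2.describe_security_groups(GroupIds=group_ids)['SecurityGroups']:
    print(sg['GroupId'], sg['IpPermissions'])

# 3. Does the VPC's route table have an active 0.0.0.0/0 route to the IGW?
route_tables = ec2.describe_route_tables(
    Filters=[{'Name': 'vpc-id', 'Values': [instance['VpcId']]}])
for rt in route_tables['RouteTables']:
    for route in rt['Routes']:
        if route.get('DestinationCidrBlock') == '0.0.0.0/0':
            print(route.get('GatewayId'), route.get('State'))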
Authorization header being removed by Elastic Beanstalk
By default, AWS Elastic Beanstalk's WSGI server strips the Authorization header from incoming requests.
To get it back, add a config file through .ebextensions: create a wsgi.authorization.config file with the following content:
files: "/etc/httpd/conf.d/wsgiauth.conf": mode: "000644" owner: root group: root content: | WSGIPassAuthorization On
IAM Notes
- you need the AmazonRDSReadOnlyAccess policy for your IAM user or role to be able to list RDS instances (a boto3 sketch for attaching it follows)
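For example, attaching that policy to a user with boto3 (the user name is hypothetical; for roles, use attach_role_policy instead):

import boto3

iam = boto3.client('iam')
iam.attach_user_policy(
    UserName='my-user',  # hypothetical user name
    PolicyArn='arn:aws:iam::aws:policy/AmazonRDSReadOnlyAccess',
)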
AWS Lambda (Py) Notes
For the handler(event, context) function, the request data arrives in the following event fields (a combined handler sketch follows this list):
- GET parameters in event["multiValueQueryStringParameters"]
  - note: parameters are stored in arrays, since the parser correctly presumes that there may be multiple values; e.g. event["multiValueQueryStringParameters"] = {"param": ["value"]}
- path parameters in event["pathParameters"]
  - note: path parameters are specified in the SAM YAML under the Path property, e.g. Path: /v1/user/{user_id}/data/
- POST/PUT body in event["body"]
  - note: stored as a string; you have to json.loads() it or similar
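Putting the three together, a minimal handler sketch (field names like "param" and "user_id" are just examples):

import json

def handler(event, context):
    # GET parameters: each key maps to a list of values
    query = event.get("multiValueQueryStringParameters") or {}
    param_values = query.get("param", [])

    # path parameters, e.g. {user_id} from Path: /v1/user/{user_id}/data/
    user_id = (event.get("pathParameters") or {}).get("user_id")

    # POST/PUT body arrives as a string and has to be parsed
    body = json.loads(event["body"]) if event.get("body") else None

    return {
        "statusCode": 200,
        "body": json.dumps({"param": param_values, "user_id": user_id, "echo": body}),
    }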
boto3 Snippets
Generate presigned S3 URL
import boto3
import botocore.client

s3 = boto3.client(
    's3',
    config=botocore.client.Config(signature_version='s3v4', region_name=BUCKET_REGION),
)
resp = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': BUCKET, 'Key': KEY},
    ExpiresIn=SECONDS,
)
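The returned resp is a plain URL string; anyone holding it can fetch the object until it expires, with no AWS credentials needed:

import requests

data = requests.get(resp).content  # raw object bytes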
AWS Chalice
Prepare your virtualenv, and install boto3 and Chalice in it.
$ virtualenv ~/.virtualenvs/test-venv
$ source ~/.virtualenvs/test-venv/bin/activate
(test-venv) $ pip install boto3 chalice
Connection to RDS
Lambdas are typically not restricted to a VPC, while RDS is strictly tied to one. You need to assign your Chalice Lambdas to the same subnets as the RDS instance, and the same security group. Chalice does almost all of this for you!
You just need to copy all subnet IDs used by your RDS instance, and the security group ID, into .chalice/config.json:
# .chalice/config.json:
{
  "version": "2.0",
  "app_name": "chalice-test",
  "environment_variables": {
    "var": "value"
  },
  "layers": ["arn:aws:lambda:..."],
  "stages": {
    "dev": {
      "api_gateway_stage": "api",
      "subnet_ids": [
        "subnet-...",
        ...
      ],
      "security_group_ids": [
        "sg-..."
      ]
    }
  }
}
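With the subnets and security group in place, a minimal sketch of a route talking to the RDS instance; it assumes a hypothetical DB_URL environment variable and SQLAlchemy being importable (e.g. via a layer like the one built below):

import os

import sqlalchemy
from chalice import Chalice

app = Chalice(app_name='chalice-test')

# assumption: DB_URL is set under environment_variables in .chalice/config.json
engine = sqlalchemy.create_engine(os.environ['DB_URL'])

@app.route('/db-check')
def db_check():
    with engine.connect() as conn:
        return {'result': conn.execute(sqlalchemy.text('SELECT 1')).scalar()}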
Send and Download Binary Files
There are two sides to this story: your Chalice code, and the receiving code (your frontend app).
On the Chalice side, you simply use the Response object with one of the registered binary content types. I typically just use application/octet-stream...
from chalice import Chalice, Response

app = Chalice(app_name='chalice-test')

@app.route('/binary-data')
def bin_echo():
    binary_data = app.current_request.raw_body
    return Response(body=binary_data,
                    status_code=200,
                    headers={'Content-Type': 'application/octet-stream'})
For more info, see the Chalice documentation.
On the frontend side, you need to make sure API Gateway does not mess things up: specify an Accept header of a binary type (not necessarily the same type as the one returned from Chalice). If you specify */* as the accepted type, you'll receive base64-encoded data.
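A quick way to see the difference from Python (API_URL is a hypothetical deployed endpoint):

import requests

API_URL = 'https://example.execute-api.us-east-1.amazonaws.com/api'  # hypothetical

# binary Accept type: raw bytes come back
raw = requests.get(API_URL + '/binary-data',
                   headers={'Accept': 'application/octet-stream'}).content

# */* Accept type: API Gateway returns base64-encoded data instead
b64 = requests.get(API_URL + '/binary-data',
                   headers={'Accept': '*/*'}).content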
Building a Python Layer
We'll be using the Amazon Linux Docker image.
Below, commands starting with $ are run on your machine, while those starting with bash-4.2# are run within the Docker container.
$ cat docker-compose.yaml
version: "3"
services:
  amzlinux:
    image: "amazonlinux"
    command: "sleep infinity"
    volumes:
      - ./:/host

$ docker-compose up
Creating network "layer-example_default" with the default driver
Pulling amzlinux (amazonlinux:)...
[...]
Attaching to layer-example_amzlinux_1

$ docker ps
CONTAINER ID   IMAGE         COMMAND            CREATED          STATUS         PORTS   NAMES
a5107c00ed35   amazonlinux   "sleep infinity"   10 seconds ago   Up 8 seconds           layer-example_amzlinux_1

$ docker exec -it layer-example_amzlinux_1 /bin/bash
bash-4.2# yum -y update
[...]
bash-4.2# yum -y groupinstall "Development Tools"
[...]
bash-4.2# yum -y install Cython
bash-4.2# yum -y install python3-pip.noarch zip
[...]
Installed:
  python3-pip.noarch 0:9.0.3-1.amzn2.0.1
bash-4.2# cd /host/
bash-4.2# mkdir -p sqlalchemy-layer/python
bash-4.2# pip3 install sqlalchemy -t sqlalchemy-layer/python/
[...]
Successfully installed sqlalchemy-1.3.11
bash-4.2# pip3 install psycopg2-binary -t sqlalchemy-layer/python/
[...]
Successfully installed psycopg2-binary-2.8.4
bash-4.2# cd sqlalchemy-layer/
bash-4.2# zip -r aws-sqlalchemy-layer.zip python/ -x \*.pyc
Now you can upload the aws-sqlalchemy-layer.zip as a Layer through the AWS console.
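If you'd rather script the upload than click through the console, a sketch with boto3 (the layer name and runtime list are assumptions):

import boto3

lam = boto3.client('lambda')

with open('aws-sqlalchemy-layer.zip', 'rb') as f:
    resp = lam.publish_layer_version(
        LayerName='aws-sqlalchemy-layer',   # hypothetical layer name
        Content={'ZipFile': f.read()},
        CompatibleRuntimes=['python3.7'],   # match the runtime you built against
    )
print(resp['LayerVersionArn'])  # reference this ARN from your function's layers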