# lambda-debugger

| field | value |
| --- | --- |
| version | 0.2.1 |
| description | AWS Lambda Runtime Emulator for local and remote debugging |
| repository | https://github.com/rimutaka/lambda-debugger-runtime-emulator |
| created_at | 2024-08-05 22:23:48 |
| updated_at | 2024-08-05 22:23:48 |
| size | 95,182 |
This emulator allows running Lambda functions locally with either a local payload from a file or a remote payload from AWS, as if the local lambda were running there.
## Local debugging

Install the emulator:

```shell
cargo install lambda-debugger
```

Use this method for simple use cases where a single static payload is sufficient.

1. Save your payload, e.g. `{"command": "echo"}`, into a `test-payload.json` file.
2. Start the emulator with the payload file: `cargo lambda-debugger test-payload.json`
3. Run your lambda with `cargo run` in a separate terminal.

The lambda will connect to the emulator and receive the payload. You can re-run your lambda with the same payload as many times as needed.
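The steps above can be combined into a copy-pasteable sketch. The payload content and file name follow the example from this section; the `cargo` commands are left commented out because they must run in their own terminals:

```shell
# Save the static test payload the emulator will serve to the lambda.
printf '%s' '{"command": "echo"}' > test-payload.json

# Terminal 1: start the emulator with the payload file (uncomment to run):
# cargo lambda-debugger test-payload.json

# Terminal 2: run the lambda; it connects to the emulator and receives the payload:
# cargo run

echo "payload written: $(cat test-payload.json)"
```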
## Remote debugging

Use this method to get a dynamic payload from other AWS services, or when you need to send back a dynamic response, e.g. to process a request triggered by a user action on a website involving API Gateway:

(Diagram: remote debugging configuration.)

This project provides the tools necessary to bring the AWS payload to your local machine in real time, run the lambda, and send the response back as if the local lambda were running on AWS.
Note that this Lambda emulator does not provide the full runtime capabilities of AWS.
Create two SQS queues with an identical configuration:

- `proxy_lambda_req` - for requests to be sent from AWS to your local lambda under debugging. Required.
- `proxy_lambda_resp` - if you want responses from your local lambda to be returned to the caller. Optional.

See the Advanced setup section for more info on how to customize queue names and other settings.
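One way to create the two queues is with the AWS CLI. This is a sketch, assuming the CLI is installed and configured with sufficient SQS privileges; the queue names are the defaults from above:

```shell
req_queue=proxy_lambda_req    # required: carries requests from AWS to your machine
resp_queue=proxy_lambda_resp  # optional: carries responses back to the caller

if command -v aws >/dev/null 2>&1; then
  aws sqs create-queue --queue-name "$req_queue"  || echo "failed to create $req_queue"
  aws sqs create-queue --queue-name "$resp_queue" || echo "failed to create $resp_queue"
else
  echo "aws CLI not found; would create queues: $req_queue, $resp_queue"
fi
```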
The following IAM policy grants proxy-lambda access to the queues. It assumes that you already have sufficient privileges to access Lambda and SQS from your local machine:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::512295225992:role/lambda_basic"
      },
      "Action": [
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes",
        "sqs:ReceiveMessage",
        "sqs:SendMessage"
      ],
      "Resource": "arn:aws:sqs:us-east-1:512295225992:proxy_lambda_req"
    }
  ]
}
```
You need to replace the `Principal` and `Resource` IDs with your values before adding the above policy to your queues. Use different `Resource` values for the request and response queues:

- `arn:aws:sqs:[your_region]:[your_aws_account]:proxy_lambda_req` for the `proxy_lambda_req` queue
- `arn:aws:sqs:[your_region]:[your_aws_account]:proxy_lambda_resp` for the `proxy_lambda_resp` queue
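A small sketch that renders the policy with your values substituted. The account ID here is a placeholder, not a real value; run it once per queue, changing `queue` to `proxy_lambda_resp` for the response queue:

```shell
account_id=111122223333   # placeholder: your AWS account ID
region=us-east-1          # placeholder: your region
role_arn="arn:aws:iam::$account_id:role/lambda_basic"  # role used by proxy-lambda
queue=proxy_lambda_req    # or proxy_lambda_resp for the response queue

# Write the policy with the placeholders expanded.
cat > "policy_$queue.json" <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "$role_arn" },
      "Action": [
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes",
        "sqs:ReceiveMessage",
        "sqs:SendMessage"
      ],
      "Resource": "arn:aws:sqs:$region:$account_id:$queue"
    }
  ]
}
EOF
echo "wrote policy_$queue.json"
```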
The proxy-lambda function should be deployed to AWS in place of the function you want to debug.
Replace the following values in the bash script below with your own before running it from the project root:

- `x86_64-unknown-linux-gnu` - the build target
- `us-east-1` - the AWS region
- `my-lambda` - the name of the function you are replacing with proxy-lambda

```shell
target=x86_64-unknown-linux-gnu
region=us-east-1
name=my-lambda

cargo build --release --target $target
cp ./target/$target/release/proxy-lambda ./bootstrap && zip proxy.zip bootstrap && rm bootstrap
aws lambda update-function-code --region $region --function-name $name --zip-file fileb://proxy.zip
```
A deployed proxy-lambda should return OK or time out waiting for a response if you run it with a test event from the AWS console. Check CloudWatch logs for a detailed execution report.
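A guarded CLI equivalent of the console test (the function name and region are the placeholders from the deploy script above; `--cli-binary-format` assumes AWS CLI v2):

```shell
name=my-lambda    # placeholder: the function you replaced with proxy-lambda
region=us-east-1  # placeholder: your region

if command -v aws >/dev/null 2>&1; then
  # Invoke with a throwaway test event; the response lands in out.json.
  aws lambda invoke --region "$region" --function-name "$name" \
    --payload '{"command": "echo"}' --cli-binary-format raw-in-base64-out out.json \
    || echo "invoke failed (check credentials and function name)"
else
  echo "aws CLI not found; would invoke $name in $region"
fi
```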
Pre-requisites:

- proxy-lambda deployed to AWS in place of the function you are debugging
- `proxy_lambda_req` and `proxy_lambda_resp` queues created with the access policy described above

Launching the local lambda:

1. Start `cargo lambda-debugger` in a separate terminal.
2. Run your lambda with `cargo run` in the same terminal where you added the env vars.

Debugging:
Success, failure and replay:

- successful responses from the local lambda are sent to the response queue (`proxy_lambda_resp`)
- the request message is deleted from the `proxy_lambda_req` queue when the local lambda completes successfully
- the response message is deleted from the `proxy_lambda_resp` queue after forwarding it to the caller, e.g. to API Gateway
- the `proxy_lambda_resp` queue is purged before a new request is sent
- purge the `proxy_lambda_req` queue manually to delete stale requests

If the local lambda fails, terminates or panics, you can make changes to its code and run it again to reuse the same incoming payload from the request queue.
By default, proxy-lambda and the local lambda-debugger attempt to connect to the `proxy_lambda_req` and `proxy_lambda_resp` queues in the same region.
Provide these env vars to proxy-lambda and lambda-debugger if your queue names differ from the defaults:

- `PROXY_LAMBDA_REQ_QUEUE_URL` - request queue, e.g. https://sqs.us-east-1.amazonaws.com/512295225992/debug_request
- `PROXY_LAMBDA_RESP_QUEUE_URL` - response queue, e.g. https://sqs.us-east-1.amazonaws.com/512295225992/debug_response

Debugging the local lambda may take longer than the AWS service is willing to wait. For example, the proxy-lambda function can be configured to wait for up to 15 minutes, but the API Gateway wait time is limited to 30 seconds.
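As a sketch, the custom queue URLs can be exported before launching either component. The account ID and queue names here are illustrative placeholders:

```shell
# Hypothetical queue URLs; replace with your own SQS queue URLs.
export PROXY_LAMBDA_REQ_QUEUE_URL="https://sqs.us-east-1.amazonaws.com/111122223333/debug_request"
export PROXY_LAMBDA_RESP_QUEUE_URL="https://sqs.us-east-1.amazonaws.com/111122223333/debug_response"

# Start the emulator in this terminal, and run the lambda with `cargo run`
# in the same terminal so it inherits the same variables:
# cargo lambda-debugger
echo "request queue: $PROXY_LAMBDA_REQ_QUEUE_URL"
```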
Assume that it took you 5 minutes to fix the lambda code and return the correct response. If proxy-lambda was configured to wait that long, it would still forward the response to API Gateway, which timed out 4.5 minutes earlier. In that case, you may need to trigger another request for the flow to complete successfully end-to-end.
It may be inefficient to have proxy-lambda wait for a response from the local lambda, either because debugging takes too long or because no response is necessary. Neither proxy-lambda nor lambda-debugger expects a response if the response queue is inaccessible. There are several ways to achieve that:
- Option 1: delete the `proxy_lambda_resp` queue.
- Option 2: add a `PROXY_LAMBDA_RESP_QUEUE_URL` env var with no value to proxy-lambda and lambda-debugger.
- Option 3: make the `proxy_lambda_resp` queue inaccessible by changing its IAM policy, e.g. change the resource name from the correct queue name `"Resource": "arn:aws:sqs:us-east-1:512295225992:proxy_lambda_resp"` to a non-existent name such as `"Resource": "arn:aws:sqs:us-east-1:512295225992:proxy_lambda_resp_BLOCKED"`.
Both proxy-lambda and lambda-debugger treat the access error as a hint to not expect a response.
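Option 2, sketched below, reads "no value" as an empty string; this is an assumption about how the two components interpret the variable:

```shell
# An empty value tells proxy-lambda and lambda-debugger not to use
# a response queue at all (assumption: empty string counts as "no value").
export PROXY_LAMBDA_RESP_QUEUE_URL=""

# The emulator would now run without expecting to send responses:
# cargo lambda-debugger
echo "response queue URL: '${PROXY_LAMBDA_RESP_QUEUE_URL}'"
```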
If your proxy-lambda is configured to expect a long debugging time, e.g. 30 minutes, you may want to cancel the wait for a rerun.
Since it is impossible to kill a running Lambda instance on AWS, the easiest way to cancel the wait is to send a random message to the `proxy_lambda_resp` queue via the AWS console. The waiting proxy-lambda will forward it to the caller and become available for a new request.
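The same cancellation can be done from the command line. This is a guarded sketch; the queue URL is a placeholder and the message body is arbitrary:

```shell
# Placeholder: replace with your proxy_lambda_resp queue URL.
resp_queue_url="https://sqs.us-east-1.amazonaws.com/111122223333/proxy_lambda_resp"

if command -v aws >/dev/null 2>&1; then
  # Any message body works: proxy-lambda forwards it and stops waiting.
  aws sqs send-message --queue-url "$resp_queue_url" --message-body '{"cancelled": true}' \
    || echo "send-message failed (check credentials and queue URL)"
else
  echo "aws CLI not found; would send a cancel message to $resp_queue_url"
fi
```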
SQS limits the message payload to 262,144 bytes, while Lambda allows payloads of up to 6MB. To get around that limitation, proxy-lambda and lambda-debugger compress oversized payloads with the flate2 crate and send them as a Base58-encoded string. The compression can take up to a minute in debug mode; it is significantly faster with release builds.
Both proxy-lambda and lambda-debugger use the `RUST_LOG` env var to set the logging level and filters. If `RUST_LOG` is not present or is empty, both crates log at the INFO level and suppress logging from their dependencies. See https://docs.rs/tracing-subscriber/latest/tracing_subscriber/filter/struct.EnvFilter.html#example-syntax for more info.

Examples of `RUST_LOG` values:

- `error` - log errors only, from all crates and dependencies
- `warn,lambda_debugger=info` - INFO level for lambda-debugger, WARN level for everything else
- `proxy=debug` - detailed logging in proxy-lambda
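Applied on the command line, using the filter values from the list above (the `cargo` invocations are commented out because they need the project and a running setup):

```shell
# INFO for lambda-debugger, WARN for everything else:
export RUST_LOG="warn,lambda_debugger=info"
# cargo lambda-debugger

# Or per invocation, errors only from all crates and dependencies:
# RUST_LOG="error" cargo run
echo "RUST_LOG=$RUST_LOG"
```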