Import/Export data from DynamoDB (small amounts)

I use these CLI commands for exporting and then later importing data from/to DynamoDB (small amounts of data – e.g. up to a few hundred items).

It does take a while to get data back into DynamoDB, as it's writing items one at a time rather than as a batch … but it gets the job done!


aws dynamodb scan --table-name source-table-name --no-paginate > data.json


cat data.json | jq -c '.Items[]' | while read -r line; do aws dynamodb put-item --table-name destination-table-name --item "$line"; done

This can be done in one line as well:

aws dynamodb scan --table-name source-table-name --no-paginate | jq -c '.Items[]' | while read -r line; do aws dynamodb put-item --table-name destination-table-name --item "$line"; done
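If the one-at-a-time put-item loop is too slow, DynamoDB's batch-write-item accepts up to 25 items per request. Here's a hedged Python sketch (not part of the original commands; to_batches is a made-up helper name) of grouping a scan's "Items" into batch-sized request payloads:

```python
def to_batches(items, table_name, size=25):
    """Group scanned items into BatchWriteItem request payloads (max 25 items per call)."""
    batches = []
    for i in range(0, len(items), size):
        chunk = items[i:i + size]
        batches.append({table_name: [{"PutRequest": {"Item": item}} for item in chunk]})
    return batches

# Items in the shape "aws dynamodb scan" returns under "Items"
items = [{"pk": {"S": str(n)}} for n in range(60)]
batches = to_batches(items, "destination-table-name")
print(len(batches))  # 60 items -> 3 request payloads (25 + 25 + 10)
```

Each payload could then be fed to aws dynamodb batch-write-item --request-items '…' (note that batch-write-item does not retry UnprocessedItems for you).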

Credit goes to:

Cleaning up old DynamoDB Auto-Scaling Resources

I've found a strange problem with CloudFormation rollbacks: they don't automatically remove any Auto Scaling resources you might have set up.

This means that when you next deploy, CloudFormation starts complaining about resources already existing!

To clean these up, you need the following (run from the command line, using the AWS CLI).

List resources:

aws application-autoscaling describe-scalable-targets --service-namespace dynamodb

From there, de-register (remove) each of the ones which shouldn't be there:

aws application-autoscaling deregister-scalable-target --service-namespace dynamodb --resource-id "table/myTableName" --scalable-dimension "dynamodb:table:ReadCapacityUnits"

aws application-autoscaling deregister-scalable-target --service-namespace dynamodb --resource-id "table/myTableName" --scalable-dimension "dynamodb:table:WriteCapacityUnits"
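If there are many stale targets, the deregister commands can be generated from the describe output. A hedged Python sketch (the JSON below is a trimmed-down example of the real response shape, which has more fields):

```python
import json

# A trimmed example of `describe-scalable-targets` output (real output has more fields)
describe_output = json.loads('''
{
  "ScalableTargets": [
    {"ResourceId": "table/myTableName", "ScalableDimension": "dynamodb:table:ReadCapacityUnits"},
    {"ResourceId": "table/myTableName", "ScalableDimension": "dynamodb:table:WriteCapacityUnits"}
  ]
}
''')

# Build one deregister command per scalable target
commands = [
    "aws application-autoscaling deregister-scalable-target"
    " --service-namespace dynamodb"
    f" --resource-id \"{target['ResourceId']}\""
    f" --scalable-dimension \"{target['ScalableDimension']}\""
    for target in describe_output["ScalableTargets"]
]

for command in commands:
    print(command)
```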

That’s it!

Invoking AWS Lambda functions from the CLI

aws lambda invoke \
  --function-name <your function name> \
  --payload '"<your json payload>"' \
  --cli-binary-format raw-in-base64-out \
  /dev/stdout

The raw-in-base64-out option lets you skip having to base64-encode the payload.

The /dev/stdout bit at the end is the output file argument: it just shows the output on your screen, rather than writing it to a file which you'd then have to read.
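To illustrate what raw-in-base64-out is saving you from: AWS CLI v2 treats --payload as base64-encoded blob input by default, so without that option you'd have to encode the payload yourself. A small sketch (the payload content is hypothetical):

```python
import base64
import json

payload = json.dumps({"name": "test"})  # a hypothetical JSON payload

# Without --cli-binary-format raw-in-base64-out, AWS CLI v2 expects the
# --payload value to already be base64-encoded, i.e. this:
encoded = base64.b64encode(payload.encode("utf-8")).decode("ascii")
print(encoded)

# With raw-in-base64-out, you pass `payload` as-is and the CLI does this for you.
assert base64.b64decode(encoded).decode("utf-8") == payload
```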



Unit-testing Bref lambda handlers

Hopefully this helps someone out there unit-testing Bref lambda consumers (e.g. AWS Lambda handlers for SNS / EventBridge / SQS, etc.) with PHPUnit.

Essentially, the test includes the consumer (which is just a PHP function) and calls it with an array of event data, in the same format AWS would normally provide.

The function (handler) then returns a response (hopefully with no thrown errors), and any assertions on the result can be made.

public function testConsumeUpdatePerson() {
    $handler = include(__DIR__ . '/../bin/consume');

    $data = json_encode([
        'action' => 'update-person',
        'id' => 1234
    ]);

    $overallJson = '{
      "Records": [
        {
          "EventVersion": "1.0",
          "EventSubscriptionArn": "arn:aws:sns:us-east-2:123456678:sns-lambda:abc-123",
          "EventSource": "aws:sns",
          "Sns": {
            "SignatureVersion": "1",
            "Timestamp": "2019-01-02T12:45:07.000Z",
            "Signature": "aaaabbbb/ccccdddd/111111==",
            "SigningCertUrl": "",
            "MessageId": "aaaabbbbb",
            "Message": "' . addslashes($data) . '",
            "MessageAttributes": {
              "Test": {
                "Type": "String",
                "Value": "TestString"
              },
              "TestBinary": {
                "Type": "Binary",
                "Value": "TestBinary"
              }
            },
            "Type": "Notification",
            "UnsubscribeUrl": ";SubscriptionArn=arn:aws:sns:us-east-2:111122222:test-lambda:aaaaa-bbbbb",
            "TopicArn": "arn:aws:sns:ap-southeast-2:1111222222:topic-name-goes-here",
            "Subject": "TestInvoke"
          }
        }
      ]
    }';

    $event = json_decode($overallJson, true);

    $response = $handler($event, new Context('', 300, '', ''));
    $this->assertEquals('OK', $response);
}

More assertions can obviously be added, but at its most basic this tests that the handler runs with no errors or unhandled exceptions which you hadn't otherwise caught.

Serializer – array to Object

To convert from an array (including multi-dimensional arrays) to an object, the following code might help!


// $data and MyClass are placeholders for your input array and target class.
// The three null arguments to ObjectNormalizer are optional callbacks (you can omit the ones you don't use).
$extractor = new PropertyInfoExtractor([], [new PhpDocExtractor(), new ReflectionExtractor()]);
$normalizer = new ObjectNormalizer(null, null, null, $extractor);
$serializer = new Serializer([
    new ArrayDenormalizer(),
    $normalizer,
]);

$obj = $serializer->denormalize(
    $data,
    MyClass::class,
    null,
    [ObjectNormalizer::DISABLE_TYPE_ENFORCEMENT => true]
);


This can also be achieved by installing the following packages, which Symfony will pick up and use with its serializer:

composer require phpdocumentor/reflection-docblock
composer require symfony/property-info

Then, in your service (or controller):

public function __construct(SerializerInterface $serializer) {
    $this->serializer = $serializer;
}

And in your code:

// $data and MyClass are placeholders for your input array and target class
$obj = $this->serializer->denormalize($data, MyClass::class);

You can also relax the enforcement of variable types (strings vs. ints) with the following:

$obj = $this->serializer->denormalize(
    $data,
    MyClass::class,
    null,
    [ObjectNormalizer::DISABLE_TYPE_ENFORCEMENT => true]
);


Serverless – creating DNS entries for API Gateway

The following can be included in your serverless.yml file to create a sub-domain in Route53 and link it up to your API Gateway (HTTP) endpoint.

The following variables are required in the 'custom' block of your serverless.yml file:

  • certificate_arn – this is the ARN of your AWS Certificate Manager SSL certificate. For regional endpoints, this should be a cert created in the same region as your API Gateway.
  • domain_hosted_zone – the zone name of your domain (e.g. if the subdomain you want is api.example.com, the domain_hosted_zone will be example.com.)
  • domain_name – this is the complete sub-domain (e.g. api.example.com)


custom:
  domain_hosted_zone: ''
  domain_name: ''
  certificate_arn: 'arn:aws:acm:ap-southeast-2:1233456:certificate/abc123'

resources:
  Resources:
    # APIDomainMapping / APIDNSRecords below are example logical names
    APIDomainName:
      Type: 'AWS::ApiGatewayV2::DomainName'
      Properties:
        DomainName: ${self:custom.domain_name}
        DomainNameConfigurations:
          - CertificateArn: ${self:custom.certificate_arn}

    APIDomainMapping:
      Type: 'AWS::ApiGatewayV2::ApiMapping'
      Properties:
        ApiId: !Ref HttpApi
        DomainName: !Ref APIDomainName
        Stage: !Ref HttpApiStage
      DependsOn: [ APIDomainName ]

    APIDNSRecords:
      Type: AWS::Route53::RecordSetGroup
      Properties:
        HostedZoneName: ${self:custom.domain_hosted_zone}
        RecordSets:
          - Name: !Ref APIDomainName
            Type: A
            AliasTarget:
              DNSName: !GetAtt APIDomainName.RegionalDomainName
              HostedZoneId: !GetAtt APIDomainName.RegionalHostedZoneId


Chocolate melting mini-cakes


  • 3/4 cup dark chocolate chips
  • 3/4 cup butter
  • 4 eggs (room temperature)
  • 3/4 cup sugar
  • 1/8 teaspoon vanilla extract
  • 1/4 cup flour


  1. Preheat oven to 190 degrees C.
  2. Melt chocolate and butter in a small saucepan, cool 10 minutes.
  3. In the meantime, in a separate bowl, whisk eggs and sugar together.
  4. Add vanilla extract/essence
  5. Add flour and whisk until flour is well mixed in.
  6. When chocolate has cooled stir in egg mixture.
  7. Fill 7 oz. ramekins about 3/4 of the way full with chocolate batter.
  8. Bake for 15-20 mins (don’t go overboard or it’ll be huge!)

The cake should be spongy on the top, but the middle should be melty and gooey, the consistency of pudding, not too runny.
Do not let it overcook. Watch these babies closely!
Serve with ice cream or whipped cream.

Credit goes to:

Including the git tag as an environment var in AWS Lambda (via Bitbucket Pipeline Deployments & serverless)

When deploying with the Serverless framework (which Bitbucket Pipelines can do), I wanted to include the version number which triggered the deploy (or any other vars and options passed on the Serverless CLI) as an environment variable.

In my case, this is shown in the footer of a Symfony web-app (more on that below).

Here's how this can be achieved:


In serverless.yml, we need to define our env-var within the function (or, as I've done, for all functions, by placing it under 'provider' -> 'environment'):

DEPLOY_VERSION: ${opt:deploy-version, 'unknown'}

In the above example, my environment variable will be called 'DEPLOY_VERSION'.

The ${opt:…} syntax gets an option we've specified on the serverless deploy command line (e.g. serverless deploy --deploy-version v1.2.3).

This allows us to pass environment vars from the command line to our functions (in our case, saying that version 1.2.3 of our software is being deployed).

Then, in our bitbucket-pipelines.yml file, we need to include an extra argument in the 'atlassian/serverless-deploy:…' pipe – e.g.:

EXTRA_ARGS: '... --deploy-version $BITBUCKET_TAG'

Here, we just pass our own option called 'deploy-version' (i.e. '--deploy-version'), using a variable which Bitbucket provides at deploy time (in our case, BITBUCKET_TAG).

In my case, I'm using tags to deploy new versions of an app (e.g. v1.2.3).
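Putting those pieces together, the pipe stanza in bitbucket-pipelines.yml might look something like the following sketch (the pipe version, step name, and tag pattern are assumptions; adjust for your setup):

```yaml
pipelines:
  tags:
    'v*':                      # run this step when a version tag (e.g. v1.2.3) is pushed
      - step:
          name: Deploy to AWS
          script:
            - pipe: atlassian/serverless-deploy:1.1.0
              variables:
                EXTRA_ARGS: '--deploy-version $BITBUCKET_TAG'
```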

Using it with Symfony

From there, it's up to you how your AWS Lambda function actually uses the environment variable. In my case, I'm using Symfony (with Bref to run it on Lambda), which requires an additional couple of steps.

In the .env file, I need to specify a default value for the environment variable (e.g. for when I'm developing locally):
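For example, the .env entry might look like this (the default value here is illustrative; anything that makes sense for local development works):

```
DEPLOY_VERSION=local-dev
```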


From there, in my case I then include it as a global variable in my templates, by adding it to my config/packages/twig.yaml file:

parameters:
    deploy_version: '%env(DEPLOY_VERSION)%'

twig:
    globals:
        deploy_version: '%deploy_version%'

And then in the footer of my pages, I can include it (e.g. in base.html.twig):

<p><small>Version: {{ deploy_version }}</small></p>


In summary, when we deploy via Bitbucket Pipelines, the version number from the tag will be included in our Symfony app (or whatever Lambda function you have).

Of course, this could be used for any variable available in Bitbucket Pipelines (or even passed via the command line to the Serverless framework).


Serverless Framework / API Gateway Quirks

So, the Serverless framework is pretty awesome!

But … out of the box, it needs a few options set up to work as well as a regular server!

  • Compression
  • Serving binary files (images/pdf files/etc – stuff your app generates and tries to send to the user)

Binary files

By default, API Gateway will have all sorts of encoding issues if you don't set this up and try to send binary files to your users. To set it up:

provider:
  apiGateway:
    binaryMediaTypes:
      - '*/*'


Compression

This is one which I hadn't even thought of until I was browsing the site on a slowish connection!

By default, content is sent from API Gateway uncompressed. Whilst your users might not see much of a difference, you could find yourself sending a lot more data than is needed (I had over a 10x saving in bandwidth … from 100kb to 6kb for JSON data).

To enable it, set:

provider:
  name: aws
  apiGateway:
    minimumCompressionSize: 1024

1024 (1kb) is the minimum response size at which compression kicks in. You can set it to 0 to compress everything, but the docs mention that if you do so, some small responses (less than 1kb) might actually end up larger.


Ref for these, and more options: