Whilst many things were affected, the rate of innovation in the AWS Cloud wasn’t one of them! One of the unexpected highlights from re:Invent 2020 was being able to watch sessions from the comfort of home. No 15-hour flights to Las Vegas were harmed in the making of this blog. No hangovers were required. There was no military planning of the schedule or running the strip between sessions. Yes, 2020 was different for a lot of reasons. A virtual re:Invent in the grand scheme of a global pandemic was small beans.

Here are our key picks from the 3 weeks of virtual re:Invent, in no particular order.

A second Australian region

What is it?

The Sydney region has been operating since November 2012 and has 3 availability zones. While this provides a highly resilient service, what if you want more? What if you are restricted to keeping your data in Australia? During re:Invent, AWS announced that planning for a second region was already in train. Due to open in 2022, the Melbourne region (ap-southeast-3?) will provide the ability to fail over to a second region whilst keeping all your data in Australia.

Why do we like it?

Although there is an Edge location in Melbourne, a second local region is pretty exciting. It will allow a richer disaster recovery capability for those that need it, whilst keeping data on-shore. All AWS regions, unlike some other Cloud providers, are built with multiple independent availability zones. The Melbourne region will be no different. Of course, both regions will be interconnected using AWS’s high speed and secure backbone.

What does it mean for our customers?

Simply, it will provide more choice with 2 regions locally. Also, for those that require a multi-region failover capability, that will now be supported on-shore.

Find out more – https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-melbourne-australia/

Babelfish for Aurora PostgreSQL

What is it?

Yes, it is a little yellow fish, but not in this context! Babelfish is a translation layer that takes Microsoft SQL Server’s proprietary T-SQL protocol and converts it for PostgreSQL running on top of the Cloud scale Aurora database. For further details, Ryan has covered it in its own blog post.
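To give a feel for what that translation layer saves you, here is an illustrative (and deliberately trivial) sketch: the same logical query written in SQL Server’s T-SQL dialect and in native PostgreSQL. With Babelfish, the T-SQL form can be sent unchanged over the TDS wire protocol; without it, every such statement would need rewriting. The table names are hypothetical.

```python
# Illustrative only: the same logical query in two dialects.
# T-SQL as a SQL Server application would send it (hypothetical table):
tsql = "SELECT TOP 10 name, getdate() AS now FROM dbo.customers"

# The equivalent statement rewritten for native PostgreSQL:
postgres = "SELECT name, now() AS now FROM customers LIMIT 10"

# Even this trivial query differs in row limiting (TOP vs LIMIT),
# date functions (getdate vs now) and schema naming conventions.
assert tsql != postgres
```

Multiply those small differences across thousands of stored procedures and the appeal of a translation layer becomes obvious.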

Why do we like it?

In recent times, Microsoft has made it progressively more expensive to run SQL Server on a non-Microsoft Cloud. Migrations to a different database engine can be complex and costly. Babelfish will help to simplify that and enable a progressive move away from a legacy database.

What does it mean for our customers?

Plenty of our customers are stuck with expensive database options. Even relatively simple applications are dependent on a proprietary database. Babelfish will open the door to migration to a Cloud scale database using an open source format. It’s not designed to be run as a shim for the long term. What it will do, is reduce the barrier to a Cloud migration.

Find out more – https://aws.amazon.com/rds/aurora/babelfish/

AWS Fault Injection Simulator

What is it?

Fault Injection Simulator is effectively chaos engineering as a service. If you don’t know what chaos engineering is, google “Netflix chaos monkey”. Basically, chaos engineering is the process of stressing an application in an environment by creating disruptive events, such as server outages or network latency. It allows you to observe how the system responds, and then implement improvements. Chaos engineering helps you simulate the real-world conditions needed to uncover unseen issues, monitoring blind spots, and performance bottlenecks that can be difficult to find. FIS will even simulate failures of AWS services!
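As a rough sketch of how an FIS experiment hangs together, the template below stops one tagged EC2 instance and aborts the experiment if a CloudWatch alarm fires. All the names, tags and ARNs are placeholders, and the structure is a simplified illustration rather than a definitive reference:

```python
# Hypothetical FIS experiment template (all ARNs and names are placeholders).
# It targets one EC2 instance tagged tier=app, stops it, and halts the
# whole experiment if the referenced CloudWatch alarm goes into alarm.
experiment_template = {
    "description": "Stop one app-tier instance and observe recovery",
    "roleArn": "arn:aws:iam::123456789012:role/fis-experiment-role",
    "targets": {
        "appInstances": {
            "resourceType": "aws:ec2:instance",
            "resourceTags": {"tier": "app"},
            "selectionMode": "COUNT(1)",  # pick a single matching instance
        }
    },
    "actions": {
        "stopOneInstance": {
            "actionId": "aws:ec2:stop-instances",
            "targets": {"Instances": "appInstances"},
        }
    },
    "stopConditions": [
        # the guard rail: abort if this alarm fires during the experiment
        {
            "source": "aws:cloudwatch:alarm",
            "value": "arn:aws:cloudwatch:ap-southeast-2:123456789012:alarm:app-errors",
        }
    ],
}

# With credentials in place, registering it might look like:
#   import boto3
#   fis = boto3.client("fis")
#   fis.create_experiment_template(clientToken="demo-1", **experiment_template)
```

The stop condition is the key design point: unlike a home-grown chaos monkey, the experiment has a built-in abort switch tied to your existing monitoring.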

Why do we like it?

Running chaos engineering is hard. Add in trying to simulate failures of the underlying platform, in this case AWS, and it is exponentially more complex. That’s where FIS comes into play.

What does it mean for our customers?

Using this new service, it will be far easier for our customers to prove the availability and recoverability of their workloads. In fact, with automation, we will be able to prove that almost continuously. This will allow us to break away from a traditional approach of only performing these kinds of tests at times of major changes.

Find out more – https://aws.amazon.com/fis/

io2 Block Express

What is it?

Spinning up storage in AWS has always been easy. There are a ton of options, from super durable S3 object storage to provisioned-IOPS block storage. Even though high performance options have been provided by AWS in the past, if you needed really intensive I/O performance you had to either use a virtual storage appliance or stripe a bunch of EBS volumes together using a software volume manager. io2 Block Express has solved that problem. The new volumes will give you up to 256,000 IOPS and 4,000 MB/s of throughput with a maximum volume size of 64 TiB, all while delivering sub-millisecond, low-variance I/O latency! That’s huge!
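To put those limits in perspective, here is a hypothetical helper that sanity-checks a requested volume against them. The 256,000 IOPS, 4,000 MB/s and 64 TiB figures come from the announcement; the 1,000 IOPS-per-GiB provisioning ratio and the 4 GiB minimum size are our assumptions for illustration:

```python
# Limits quoted in the io2 Block Express announcement:
MAX_IOPS = 256_000
MAX_SIZE_GIB = 64 * 1024  # 64 TiB

# Assumed values for illustration (check current EBS documentation):
MIN_SIZE_GIB = 4
IOPS_PER_GIB = 1_000  # assumed provisioning ratio

def validate_io2_volume(size_gib: int, iops: int) -> bool:
    """Return True if the requested size/IOPS combination fits the limits above."""
    if not MIN_SIZE_GIB <= size_gib <= MAX_SIZE_GIB:
        return False
    if iops > MAX_IOPS:
        return False
    # IOPS must also be achievable for the volume's size
    if iops > size_gib * IOPS_PER_GIB:
        return False
    return True

# A maxed-out volume: 64 TiB at 256,000 IOPS is within the quoted limits.
print(validate_io2_volume(64 * 1024, 256_000))
```

The point of the sketch is the shape of the trade-off: IOPS scale with volume size up to a hard ceiling, rather than requiring you to stripe multiple volumes to get there.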

Why do we like it?

Getting screaming storage performance has required either additional cost or complexity (and often both!). Now with io2 Block Express, you can just turn it on like any other storage layer and scale as required. You can forget about having to manage the storage performance!

What does it mean for our customers?

Our customers will see ultra high performance for their SAP HANA or Microsoft SQL Server workloads. It is equally applicable to mission-critical transaction processing applications. All this performance will be available whilst being able to decommission third party appliances or software solutions, ultimately reducing cost and complexity!

Find out more – https://aws.amazon.com/blogs/aws/now-in-preview-larger-faster-io2-ebs-volumes-with-higher-throughput/

Lambda enhancements

What are they?

As AWS’s serverless technology, Lambda is super critical to our operations and to our customers. Being able to run small lightweight functions that perform singular actions without worrying about infrastructure is liberating! After all, no server is easier to manage than no server! We are able to run hundreds of actions in parallel and in response to security events in our environments. These short running functions will really benefit from the new per-millisecond billing. Until now, billing has been in 100 millisecond blocks.
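The arithmetic behind that change is simple to sketch. Billing rounds the duration up to the granularity, so short functions were paying for time they never used; the rate below is the commonly quoted us-east-1 figure and is illustrative only:

```python
import math

PRICE_PER_GB_SECOND = 0.0000166667  # illustrative us-east-1 rate

def lambda_cost(duration_ms: float, memory_mb: int, granularity_ms: int) -> float:
    """Cost of one invocation, rounding duration up to the billing granularity."""
    billed_ms = math.ceil(duration_ms / granularity_ms) * granularity_ms
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND

# A 128 MB function that actually runs for 30 ms:
old = lambda_cost(30, 128, granularity_ms=100)  # billed as 100 ms
new = lambda_cost(30, 128, granularity_ms=1)    # billed as 30 ms
print(new / old)  # → 0.3, a 70% reduction for this invocation
```

The shorter the function, the bigger the win, which is exactly the profile of the small event-driven automation functions we run.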

Sometimes, we want to do some data enrichment and a little more processing grunt is required. Now you can allocate up to 10GB of memory and 6 vCPUs to a Lambda function, almost 3x the previous limit!

Finally, if you are a container user, you can now use your container image tooling and workflows to build Lambda applications.

On top of that, as heavy users of Python for our functions, we are excited about Amazon CodeGuru support for Python (ok, that’s not strictly a Lambda enhancement but we like it!). CodeGuru is an automated code review service that uses machine learning to assess the code against best practice development and security standards.

Why do we like them?

Lambda is a cornerstone of our managed services solutions, particularly as it relates to automation. These enhancements will not only make it more cost effective but also increase the opportunity to use Lambda over managing EC2 instances!

What do they mean for our customers?

As our customers look to break apart their monolithic applications and adopt a serverless approach, more options for how they use Lambda can only be a huge benefit. We have seen customers reduce their run cost and operational complexity by moving from an instance based to a serverless based solution. Add on the ability to access persistent storage through EFS, which was released earlier in 2020, and a vast number of event driven solutions are now highly suitable to a Lambda solution. These enhancements only make that more true than ever!

Find out more –

https://aws.amazon.com/blogs/aws/new-for-aws-lambda-functions-with-up-to-10-gb-of-memory-and-6-vcpus/

https://aws.amazon.com/blogs/aws/new-for-aws-lambda-1ms-billing-granularity-adds-cost-savings/

https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/

https://aws.amazon.com/codeguru/

Honourable mentions

As with every year at re:Invent, choosing just a few highlights is never an easy task! No list is ever complete without a few honourable mentions. In fact, any of these could easily have made the cut as one of the highlights of re:Invent 2020!

  • AWS Proton – AWS Proton is a fully managed application deployment service for container and serverless applications. Engineering teams can use Proton to connect and coordinate all the different tools needed for infrastructure provisioning, code deployments, monitoring, and updates. Maintaining hundreds of microservices with constantly changing resources and CI/CD configurations is a difficult task for even the most capable platform teams. See our AWS Proton test drive!
  • AWS CloudShell – CloudShell is a browser based Linux shell available in the AWS console. It comes pre-installed with the AWS CLI and pre-authorised with the same credentials you used to log on to the console. CloudShell makes it easy to securely manage, interact with and explore your resources from the command line;
  • Observability enhancements – there were a heap of announcements around monitoring and observability. From the machine learning based Amazon Lookout for Metrics to the AWS Distro for OpenTelemetry, it is now easier to consume data and identify potential issues and incidents faster than ever;
  • AWS Network Firewall – although it was announced in the run up to the event and therefore doesn’t qualify as one of the highlights of re:Invent, we love this new service. Follow along as we fired it up in our AWS environment.

Well, it might have been a virtual ride, but it’s been a hectic one all the same. If you want to understand more about these or any other enhancements and services from AWS re:Invent, please get in contact with us at RedBear.
