Key Takeaways from AWS Summit London 2022
AWS Summit London returned for the first time in two years, held at ExCeL London on 27 April 2022.
AWS Summits are events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Summits are held in major cities around the world and attract technologists from all industries and skill levels who want to discover how AWS can help them innovate quickly and deliver flexible, reliable solutions at scale. This year’s AWS Summit had over 80 sessions, with topics including Artificial Intelligence, Machine Learning, Analytics, Digital Transformation and more.
Our Technology Recruitment team members, Muhammad Ali and Sahil Kazi, attended the event to keep up to date with the latest trends from the technology innovators and to meet some of our clients and candidates.
Key Tech Takeaways
The focus this year was on incremental improvements and on helping AWS customers make the best use of existing services. This was supported by interesting customer case studies and deep dives into various AWS components and use cases. Highlights from the event included:
The keynote reiterated the value of AWS’s in-house Graviton processors, which offer a (workload-dependent) 25% performance increase over comparable instance types, resulting in reduced running costs and a smaller carbon footprint. Graviton processors are also available in various serverless services, so in many cases they will be easy to adopt, as in the sketch below. For specialist machine learning use cases there are AWS Trainium and Inferentia which, like Graviton, offer improved performance at a given price.
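As a rough illustration of how small the change can be for serverless workloads, the sketch below creates a Lambda function that runs on Graviton2 simply by requesting the arm64 architecture. The function name, deployment bucket, key and role ARN are hypothetical placeholders; only the Architectures parameter is Graviton-specific.

```python
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical function; the only Graviton-specific setting is Architectures=["arm64"].
lambda_client.create_function(
    FunctionName="orders-api",                                   # placeholder name
    Runtime="python3.9",
    Role="arn:aws:iam::111122223333:role/LambdaExecutionRole",   # placeholder role
    Handler="app.handler",
    Code={"S3Bucket": "my-deploy-bucket", "S3Key": "orders-api.zip"},
    Architectures=["arm64"],   # run on Graviton2 rather than the default x86_64
)
```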
From the Ocado team, James Donkin (Chief Technology Officer) and Alex Harvey (Chief of Advanced Technology) presented their solution for automated picking of home shopping orders, an interesting optimisation problem. They made use of AWS Outposts to help meet their requirement for low latency.
There was wide coverage of SageMaker at the event, with the keynote referencing the recently released serverless inference. Training a machine learning model can be compute-intensive, but once trained, serving predictions from it is relatively light on resources. Serverless inference allows AWS customers to consume their models without managing the underlying infrastructure. The same concept was demonstrated throughout the day on the DeepRacer track: models are trained in the cloud and then downloaded to the physical DeepRacer car with its modest Atom processor.
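For context, a minimal sketch of deploying a model to a serverless endpoint with the SageMaker Python SDK might look like the following. The model artefact, entry point and role ARN are hypothetical, and the memory and concurrency settings are illustrative.

```python
from sagemaker.serverless import ServerlessInferenceConfig
from sagemaker.sklearn import SKLearnModel

# Hypothetical model artefact and execution role.
model = SKLearnModel(
    model_data="s3://my-bucket/model.tar.gz",
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    entry_point="inference.py",
    framework_version="1.0-1",
)

# No instance type or count: SageMaker provisions capacity on demand per request.
predictor = model.deploy(
    serverless_inference_config=ServerlessInferenceConfig(
        memory_size_in_mb=2048,   # 1024-6144 MB, in 1 GB increments
        max_concurrency=5,        # cap on concurrent invocations
    )
)
```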
A subsequent session on MLOps went into depth on SageMaker, demonstrating that it is much more than a cloud-hosted Jupyter notebook. SageMaker Pipelines provides CI/CD workflows for developing and releasing models, including process templates with governance that can be deployed and managed through SageMaker Studio. The session was then handed over to Greig Cowan, Head of Data Science at NatWest, whose team has made extensive use of these capabilities.
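As a flavour of what a pipeline definition looks like, the sketch below assembles a single training step into a SageMaker pipeline; a real CI/CD workflow would add processing, evaluation, model registration and approval steps. The estimator, training script, S3 paths and role ARN are hypothetical.

```python
from sagemaker.sklearn.estimator import SKLearn
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep

ROLE = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder role

# Hypothetical training job definition.
estimator = SKLearn(
    entry_point="train.py",
    role=ROLE,
    instance_type="ml.m5.large",
    instance_count=1,
    framework_version="1.0-1",
    py_version="py3",
)

train_step = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={"train": "s3://my-bucket/training-data/"},
)

# Register (or update) the pipeline definition, then kick off an execution.
pipeline = Pipeline(name="demo-model-pipeline", steps=[train_step])
pipeline.upsert(role_arn=ROLE)
pipeline.start()
```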
A breakout session on architecture patterns for Software as a Service (SaaS) providers gave useful insight into how to provide isolation between tenants and suggested deployment options for the service’s control plane. AWS has published several whitepapers on this, and there is also a Well-Architected SaaS Lens.
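One widely documented isolation technique is to vend tenant-scoped credentials by attaching an inline session policy when assuming a shared access role, so each request can only touch its own tenant’s data. A minimal sketch of that idea, with a hypothetical DynamoDB table, account ID and role:

```python
import json
import boto3

sts = boto3.client("sts")

tenant_id = "tenant-42"  # hypothetical tenant identifier

# Session policy restricting access to the tenant's partition of a shared table.
scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:eu-west-2:111122223333:table/Orders",
        "Condition": {
            "ForAllValues:StringEquals": {"dynamodb:LeadingKeys": [tenant_id]}
        },
    }],
}

# Effective permissions are the intersection of the role's policy and this document.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/TenantAccessRole",
    RoleSessionName=f"saas-{tenant_id}",
    Policy=json.dumps(scoped_policy),
)["Credentials"]
```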
An identity and access management workshop gave insight into attribute-based access control (ABAC). ABAC can improve operational efficiency by delegating granular administration from central admin teams to development squads. AWS’s Leela Lagudu gave a walkthrough of granting access based on resource tagging.
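The core of tag-based ABAC is an IAM condition that compares a tag on the calling principal with a tag on the resource. A minimal sketch, assuming a hypothetical "team" tag and policy name:

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical ABAC policy: principals may stop or start only EC2 instances
# whose "team" tag matches their own "team" principal tag.
abac_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances"],
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            "StringEquals": {"aws:ResourceTag/team": "${aws:PrincipalTag/team}"}
        },
    }],
}

iam.create_policy(
    PolicyName="TeamScopedEc2Control",  # placeholder name
    PolicyDocument=json.dumps(abac_policy),
)
```

Because access follows tags rather than hard-coded resource lists, onboarding a new squad or resource typically needs consistent tagging rather than new policies.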
About zefa
We are true DevOps specialists, providing the deep knowledge and market insight our clients need to build great technology teams and world-class innovations.
Need to accelerate your digital transformation and hire the best DevOps talent? Whether you are looking for permanent or contract DevOps specialists, we can help.
Contact us to learn more about how zefa can help: