Scaling E-commerce with Serverless: The moonpig.com Story
In this episode, we dive deep into Moonpig's migration journey from an on-premises ASP.NET monolithic application to a fully serverless architecture on AWS. Richard Pearson, Head of Engineering, and Alexis Lowe, Principal Engineer at Moonpig, share their experience transforming a 25-year-old e-commerce platform. They discuss how they tackled the challenges of migrating from SQL Server to DynamoDB, implemented multi-region deployment, and achieved seamless scalability for their peak trading periods. Learn about their "no VPC" policy, their approach to observability, and how they organized their teams to embrace DevOps culture. This episode is particularly relevant for organizations considering a similar journey to serverless architecture or looking to scale their platforms globally.
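A core theme of the episode is replacing relational query patterns with key-based DynamoDB lookups. As a rough illustration of that kind of access pattern (the table and key names below are hypothetical, not Moonpig's actual schema), here is a minimal sketch using boto3:

```python
# Minimal sketch of a key-based DynamoDB access pattern replacing a
# SQL Server query (hypothetical "orders" table and key names).
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
orders = dynamodb.Table("orders")  # hypothetical table name

def get_recent_orders(customer_id: str, limit: int = 10) -> list[dict]:
    """Fetch a customer's most recent orders by partition key,
    newest first, instead of a JOIN/ORDER BY on the relational side."""
    response = orders.query(
        KeyConditionExpression=Key("customer_id").eq(customer_id),
        ScanIndexForward=False,  # assumes a date-based sort key, descending
        Limit=limit,
    )
    return response["Items"]
```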
--------
37:05
Deploying MCP servers on Lambda
In this episode, we dive deep into MCP (Model Context Protocol) servers on AWS Lambda. We explore what MCP is, how it enables AI systems to interact with tools through standardized protocols, and practical implementations on AWS Lambda. The discussion covers authentication mechanisms, deployment strategies, and the future potential of MCP servers as a marketplace for AI capabilities. Whether you're building AI-powered applications or interested in exposing your business capabilities to AI systems, this episode provides valuable insights into the technical aspects and business opportunities of MCP servers.
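To ground the discussion, here is a hedged sketch of what an MCP-style server on Lambda might look like: a handler behind API Gateway or a Function URL answering JSON-RPC requests. The tool name and payload shapes below are illustrative simplifications; the authoritative details are in the MCP specification and SDKs.

```python
# Hedged sketch: a Lambda handler answering MCP-style JSON-RPC tool requests.
# Tool names and response shapes are illustrative, not a full MCP implementation.
import json

TOOLS = [{
    "name": "get_order_status",  # hypothetical business capability
    "description": "Look up the status of an order by its ID.",
    "inputSchema": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
}]

def handler(event, context):
    request = json.loads(event["body"])  # JSON-RPC 2.0 message
    method, req_id = request.get("method"), request.get("id")

    if method == "tools/list":
        result = {"tools": TOOLS}
    elif method == "tools/call":
        args = request["params"]["arguments"]
        result = {"content": [{"type": "text",
                               "text": f"Order {args['order_id']} has shipped."}]}
    else:
        result = {}  # a real server handles initialization and other methods

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"jsonrpc": "2.0", "id": req_id, "result": result}),
    }
```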
--------
49:02
When AI meets biology: Using LLMs to find natural alternatives to antibiotics
In this episode, we explore how Phagos, a French biotech startup, combines biology, data science, and cloud computing to combat antimicrobial resistance. Their innovative approach uses bacteriophages - natural predators of bacteria - as an alternative to antibiotics. We discuss how they leverage AWS services, including SageMaker and batch processing, to analyze genomic data and train specialized language models that can predict phage-bacteria interactions. Our guests explain how they process terabytes of genetic data, train and deploy AI models, and create user-friendly interfaces for their lab scientists. This fascinating conversation reveals how cloud computing and artificial intelligence are revolutionizing biotechnology and potentially helping solve one of this century's biggest health challenges.
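The batch-processing side of such a pipeline often amounts to queueing one containerized analysis per genome. As a rough sketch of that pattern (queue, job definition, and parameter names are hypothetical, not Phagos's actual setup):

```python
# Hedged sketch: submitting a genomic analysis job to AWS Batch with boto3.
# Queue, job definition, and environment variable names are hypothetical.
import boto3

batch = boto3.client("batch")

def submit_genome_analysis(sample_uri: str) -> str:
    """Queue one container run that processes a genome stored in S3."""
    response = batch.submit_job(
        jobName="phage-host-prediction",
        jobQueue="genomics-queue",          # hypothetical queue
        jobDefinition="genome-analysis:1",  # hypothetical job definition
        containerOverrides={
            "environment": [{"name": "SAMPLE_URI", "value": sample_uri}],
        },
    )
    return response["jobId"]
```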
--------
38:27
From Monolith to Microservices: How Zilch Scaled a Modern Payment Platform
Join us for an insightful conversation with Mike Davis, Engineering Manager, and Rob Nelson, VP of Engineering at Zilch, a leading UK-based buy now pay later platform. Discover how this cloud fintech scaled from a monolithic architecture to a sophisticated microservices ecosystem serving 5 million customers. Learn about their journey migrating from MSSQL to Aurora, their innovative use of AWS services including EKS, SNS/SQS for event-driven architecture, and API Gateway for WebSocket connections. The discussion explores their unique implementation of push notifications, in-app messaging, and how they leverage generative AI for merchant discovery. Get a behind-the-scenes look at their fraud detection system using Kinesis and Flink, and hear about their upcoming physical card launch.
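The SNS/SQS event-driven pattern mentioned here typically means publishing domain events to a topic that each microservice consumes through its own queue. A minimal sketch of the publishing side, assuming a hypothetical payment-events topic (not Zilch's actual event model):

```python
# Hedged sketch of the SNS -> SQS fan-out pattern: publish a domain event;
# each service subscribes its own SQS queue to the topic and consumes
# independently. Topic ARN, event shape, and attributes are illustrative.
import json
import boto3

sns = boto3.client("sns")

def publish_payment_event(order_id: str, amount_pence: int) -> None:
    """Publish a payment event for downstream services to react to."""
    sns.publish(
        TopicArn="arn:aws:sns:eu-west-2:123456789012:payment-events",  # placeholder
        Message=json.dumps({"order_id": order_id, "amount_pence": amount_pence}),
        MessageAttributes={
            "event_type": {"DataType": "String", "StringValue": "PaymentAuthorised"},
        },
    )
```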
--------
44:58
Lambda Runtimes deep dive: Behind the serverless curtain
In this episode, we dive deep into AWS Lambda runtime environments with Maxime David, Software Development Engineer in the Lambda runtime team. Discover how AWS manages and updates the foundation of serverless computing, ensuring millions of functions continue to run smoothly while being patched and updated behind the scenes. Learn about the complex deployment processes, security considerations, and the team's commitment to maintaining backwards compatibility. Maxime explains how Lambda runtimes are structured, from the operating system to language support and AWS SDK integration. We also discuss custom runtimes, the role of Firecracker in providing isolation, and the team's efforts toward open-sourcing their work.
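For context on the custom runtimes discussed, the contract a runtime fulfils is the Lambda Runtime API: poll for the next invocation, run the handler, then post the result. A stripped-down sketch of that loop (a real bootstrap also handles errors, tracing headers, and initialization):

```python
# Hedged sketch of the event loop a custom runtime runs against the
# Lambda Runtime API. Error handling and initialization are omitted.
import json
import os
import urllib.request

API = f"http://{os.environ['AWS_LAMBDA_RUNTIME_API']}/2018-06-01/runtime"

def main() -> None:
    while True:
        # Long-poll for the next event; the invocation ID arrives in a header.
        with urllib.request.urlopen(f"{API}/invocation/next") as resp:
            request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
            event = json.loads(resp.read())

        result = {"echo": event}  # stand-in for the real handler

        # Report the result for this invocation back to the Runtime API.
        urllib.request.urlopen(urllib.request.Request(
            f"{API}/invocation/{request_id}/response",
            data=json.dumps(result).encode(),
            method="POST",
        ))

if __name__ == "__main__":
    main()
```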