We are excited to share some news we have been brewing for a couple of months: the next evolution of the Lyrid Platform, making it even more accessible for everyone to get their code built, deployed, and running as serverless applications.
Right after our first Medium post back in December, we gained many early users whose feedback let us focus on what to build for our next iteration. We are truly grateful to our early believers, and we made sure they were heard. Our goal has always been the same: everything we do is to make it easy and simple for any developer who wants to deploy their applications to the cloud as serverless applications.
Some of these updates were pushed out at the end of March 2021, and we have since been ironing out the release, gathering more feedback, and getting our partners ready to use these new features in their own solutions.
With that, let us dive into the updates!
Lyrid Execution Domain (*.lyr.id)
If we had to pick the most important feature, it would be this one. In this update, we created a new domain (lyr.id) for our end-users to access their applications publicly. This changes how we serve serverless function HTTP endpoints: we put a wildcard DNS rule and certificate in place and created a new HTTP proxy service that maps a randomly generated string, used as a subdomain name, to the serverless deployment on any public cloud we support. Here is what the flow looks like:
This publicly generated endpoint is accessible by default; every submission generates one of these URLs. Your application is instantly available to end-users without configuring any virtual machines, container instances, or network firewalls. We host the *.lyr.id endpoint with geo-distributed DNS across multiple clouds.
There's an option to turn this off during submission if you want to keep the endpoint hidden. Check out our documentation for more information: https://docs.lyrid.io
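The proxy's core job can be sketched in a few lines of Node. This is a simplified illustration, not Lyrid's actual implementation; the routing table, subdomain, and target URL below are made up:

```javascript
// Minimal sketch of wildcard-subdomain routing (hypothetical, not Lyrid's code).
// A request to https://abc123.lyr.id/... carries "abc123.lyr.id" in its Host
// header; the proxy extracts "abc123" and forwards to the mapped deployment.
const routes = new Map([
  // randomly generated subdomain -> real serverless endpoint (made-up values)
  ['abc123', 'https://cloud-provider.example.com/fn/abc123'],
]);

function resolveTarget(hostHeader) {
  const host = hostHeader.split(':')[0];          // drop any port suffix
  if (!host.endsWith('.lyr.id')) return null;     // only handle *.lyr.id hosts
  const subdomain = host.slice(0, -'.lyr.id'.length);
  return routes.get(subdomain) || null;           // null -> the proxy answers 404
}
```

A real proxy would then stream the incoming request to the resolved target (e.g. with `http.request`) and relay the response back to the caller.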
Custom Domain DNS Support
A custom domain can be mapped to the execution subdomain. All the user has to do is create a DNS record (either a CNAME or an A record) that maps their application's DNS name to the subdomain.
Once the name is mapped, on the first call the platform automates the certificate request through the Let's Encrypt Certbot APIs:
The issued certificate is stored internally within the platform, and we apply account-level encryption at rest when saving and distributing these certificates.
In the future, we will let users upload their own certificates; if you need that capability today, please let us know and we will be happy to support it.
Additional Support for Node.JS Server-Side Rendered (SSR) Platforms
Another thing we have been working on over the past few months is support for Single-Page Applications (SPAs) built with Server-Side Rendered (SSR) frameworks.
The rise and success of these frameworks for single-page applications is a natural progression in building web-native applications. Creating a scalable, high-performance, search-engine-optimized web application no longer requires long development cycles, and we definitely see the value of supporting these frameworks natively in our platform.
We chose to support Next.JS first inside our platform:
Next.js is an open-source React front-end development web framework created by Vercel that enables functionality such as server-side rendering and generating static websites for React-based web applications.
Supporting SSR frameworks required us to modify our Node.JS workflow to support a chain of commands that builds the SSR assets and saves them to be served inside an Express.JS application.
We applied that update to our own landing page, lyrid.io:
Once we knew we could support one SSR framework, we tested the new workflow against all of these frameworks, and most of them are naturally supported, such as:
We will write guides and tutorials for these; in the meantime, check our updated documentation on how to get your Next, Nuxt, or Gatsby project running inside our platform as a Lyrid Serverless Application.
Managed Environment Variables Support
Within this period, we also introduced a way to securely set environment variables in the platform, so users no longer have to rely on a file. Environment variables can be injected through our API and web interface:
They are set at deployment time of the application into the appropriate platform. Users are also able to override values for each deployment.
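Conceptually, the per-deployment override works like a layered merge. This is a sketch of the assumed semantics (deployment values win on conflict), not the platform's internals:

```javascript
// Sketch: application-level environment variables with per-deployment
// overrides (assumed behavior: the override layer wins on conflict).
function effectiveEnv(appEnv, deploymentOverrides = {}) {
  return { ...appEnv, ...deploymentOverrides };
}

// Hypothetical example values:
const appEnv = { DB_HOST: 'db.internal', LOG_LEVEL: 'info' };
const staging = effectiveEnv(appEnv, { LOG_LEVEL: 'debug' });
// staging.DB_HOST stays 'db.internal'; staging.LOG_LEVEL becomes 'debug'.
// The merged result is what gets injected at deployment time, instead of
// shipping a .env file alongside the code.
```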
AWS Lambda Defaults to Container Deployment
Back in December 2020, Amazon Web Services announced that Lambda container image support was publicly available:
We decided to take full advantage of this as soon as we could spare some engineering cycles. We were quite excited about it because building containers not only allows us to build larger packages (we hit the 50MB upload limit in many instances), it also streamlines our build and deployment process, as it fits neatly into our workflow pipeline.
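For reference, a container-image Lambda for Node is built from AWS's public base image. A minimal Dockerfile looks roughly like this; it is illustrative only, not Lyrid's actual build file:

```dockerfile
# Illustrative only — not Lyrid's actual build file.
FROM public.ecr.aws/lambda/nodejs:14

# Copy the function code and install its dependencies. Container images can
# be much larger than the 50 MB zip upload limit (up to 10 GB).
COPY app.js package*.json ${LAMBDA_TASK_ROOT}/
RUN npm install --production

# Handler in "file.exportedFunction" form
CMD ["app.handler"]
```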
Along with environment and policy management for applications, we also made various improvements to the application user interface to improve the serverless application experience.
We added a new application overview that shows the details of a serverless application, from the size of the submission to the size of the builds for each platform and where it is deployed.
Execution Domain Management
We also added a user interface for users to add and attach their custom domains and preview their applications:
We brought console monitoring into the UI so users can watch the console output of their applications:
And lastly, in our new code viewer, users can check the running revision of a submission.
Tighter Deployment and Various Infrastructure Improvements
As for updates in our backend infrastructures:
- We updated our infrastructure deployment code.
- We integrated and repackaged our modules into our own Serverless Application.
- We utilize more of Kubernetes' automatic cluster scaling in our backend platform.
Here’s the anatomy of how Lyrid manages a Kubernetes cluster with our services installed inside it.
The tighter integration between the services resulted in a massive improvement in our average execution latency over time.
As shown in our latency metrics, we brought our services back online with reduced jitter and improved latency by upward of 80% for almost every call into the platform.
These tighter Kubernetes integrations are a sneak peek at our ability to bring a Lyrid on-premise solution a step closer to your data center, harnessing the same capabilities as the SaaS that is available today.
The past few months have been quite busy for us as we expand our reach and our partner network to constantly improve the platform. Here are some of the product focuses we have identified for the near future:
- Improving our onboarding and embedding tutorials
- Support for submission using Git URLs
- Native CRON Scheduling Event
We strive to be the most fluid and agnostic platform for delivering any application, and there's so much potential for where we can go from here. Register for a free Lyrid account at the following link and try all of this yourself:
Lastly, please do not be shy: contact me at email@example.com or join our Slack channel to say hi. Thank you for reading, and stay tuned for more updates, tutorials, and how-to guides!