My name is Handoyo Sutanto and I am the co-founder and creator of Lyrid. It is only the second week of December 2020, and there have already been so many announcements touching different aspects of our lives, ranging from COVID-19 vaccine news to the Airbnb and DoorDash IPOs. But on top of all that big news, we have bigger news of our own: this month, we are publicly opening access to our platform without any restrictions!
Yes, really! You can register on our platform and deploy your own serverless applications in less than five minutes, with no credit card required!
Firstly…What is Lyrid?
We are a collection of tools and frameworks put together as a SaaS platform. You can use these to build cloud-native serverless applications that run on multiple serverless public cloud platforms (such as AWS Lambda or Google Cloud Run).
Lyrid is first and foremost a platform for anyone to run their application servers. We create a seamless experience so anyone can develop their idea as efficiently and frictionlessly as possible without compromising security, interoperability, and scalability. Multi-cloud serverless compute is one of the tools to get there.
We handle the entire build and deployment process across different cloud platforms for our users, so they can focus on what is important to them: their business. We mean it when we say they should focus on their business and operations, not on figuring out things like:
- Which cloud platform to use?
- Where they need to deploy their services?
- How many resources will they need? CPU? RAM? How many instances? And where?
We achieve this by managing and driving your cloud builds, deployments, and executions using policies, and by building more intelligence into the platform about how, where, and when your application should be built, distributed, and executed. We also deploy and execute your functions using our Universal API Gateway.
Understanding The Motivation
Prior to Lyrid, my background for the past 15 years was in enterprise infrastructure and distributed systems software. The work varied from software that distributes media and transcoding jobs to building scalable storage, compute, and networking subsystems using various technologies.
I love experimenting with new technologies, but the more I wrote and deployed my code in the cloud, the more I started to realize something.
Reducing Repetition and Guesswork
That is…most of what we (cloud engineers and DevOps) do is actually repetition. We like to think that what we do is revolutionary, but in most cases it is just a combination of muscle memory (repetition) and guesswork. Knowing exactly what kind of resources, and how many, are needed at a given point in time is an ongoing battle for us. There are tons of tools, analytics, and monitoring software that revolve around these problems.
Here’s what you feel like doing when you guess wrong:
We are not condoning property destruction, but boy, can I relate!
I am sure we can do better than this…It was about four years ago that I got my first exposure to serverless through AWS Lambda, and I was immediately drawn to the technology, especially how it removes almost all of the guesswork (at least on the question of “how many instances do I need?”). The baked-in efficiency inherited from the pay-per-execution cost structure was something I had never seen before!
Clearing Misconceptions about Serverless Technology
However, the benefits I mentioned above do come with a cost: almost every managed serverless platform out there (AWS Lambda, Google Cloud Functions, Azure Functions, etc.) requires your code to conform to the cloud’s specification for how to “run” your function. Case in point: check out the different entry points for function input/output that users have to put together in order to make their code work inside each platform:
And because of that, serverless is probably the most misunderstood technology of the past four years. The amount of conflicting information floating around the internet is toxic for the growth of the technology itself. Here are some of the tidbits I have heard, from my personal experience, from colleagues who resist deploying a serverless solution:
- How tied it is to the platform. You can only fully benefit from serverless if you put all your workloads on the same platform.
- Which leads to: investing in making something serverless will pretty much lock you into the platform further.
- Along with complex, hard-to-predict pricing. Scale-to-zero and pay-as-you-scale are actually a double-edged sword for adoption; there is real comfort in knowing the cost upfront. Check out this recent blog written by Sudeep at Milkie Way, Inc. on how it can get out of control: https://medium.com/milkie-way/we-burnt-72k-testing-firebase-cloud-run-and-almost-went-bankrupt-part-1-703bb3052fae
- And lastly, there is still no clear understanding of what can or cannot run as serverless.
And here are some examples of the conflicting information about serverless that confuses most people:
- Why do I need to learn about the platform? Isn’t the point of serverless to focus on your business logic?
- Why do I really still have to know how big I should scale my machine? Isn’t the platform supposed to take care of that?
- Why do I really still have to know where it is deployed? Isn’t the platform supposed to take care of that as well?
One more misconception about the technology is an association that people make. We want to promote a new way of thinking: AWS Lambda != serverless. AWS Lambda is a compute platform that runs serverless workloads.
They were the ones that popularized the term, so a lot of people associate the two, and this association is a hindrance to adoption.
We will go into more detail about what we think are the common misconceptions about serverless in upcoming blogs.
Don’t get us wrong…we are actually strong believers in serverless! We believe everyone should write more serverless functions! I had a vision of what serverless should be, and with that I started this company. Our view is that serverless is both a piece of technology and a philosophy for how developers should build applications:
- Never have any infrastructure to manage from the end-user side. No infrastructure is needed to build and distribute your application. Not even containers are required. Just build your code on your machine.
- Standardized input/output interfaces regardless of where the code runs.
- Universally accessible tools, platforms and frameworks that can be used and scaled by anyone from college students, scrappy start-ups, to Fortune 500 companies.
- Focus on your code! Optimize there! Really…and not on how it is built, distributed, deployed, and executed. Those should be the platform’s job to optimize.
- The platform itself should expand, shrink, and auto-load-balance with geographical awareness in the background, with the ability to scale to zero when there are no requests, further reducing any guesswork.
- Privacy and security need to be baked into the platform, and users should be able to harden their deployment as much as they need to. The platform should always facilitate this without getting in the way.
Building Platform For Everyone
This is pretty much the driving force that made us want to create Lyrid.
In the end, what we want to create is a platform for everyone to test and build their ideas rapidly while keeping all the cloud best practices built in.
We believe in the power of serverless: given the right tools built around it, we can make your application run serverless on any given platform.
We see an opportunity to provide the ecosystem of tools necessary for running applications as efficiently as possible (using serverless on any cloud), removing as much of the guesswork and excess as possible.
How does it work?
The bulk of what our platform does runs after you submit your application. At that point, our command-line client zips and uploads your code and submits it into our pipeline.
At code submission, our platform wraps your application with a lightweight wrapper that lets your function run on the different cloud vendors we support.
The wrapped code is stored inside our infrastructure and built on our platform. We then execute build recipes that depend on your programming language, web server framework, and target cloud infrastructure. The output of these builds is a set of artifacts that can be deployed directly to the appropriate cloud vendors.
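To make the recipe idea concrete, here is a small, purely conceptual Go sketch. The recipe names, language tags, and cloud identifiers are all invented for this post, not Lyrid’s actual schema; the point is only that one recipe exists per (language, framework, cloud) combination:

```go
package main

import "fmt"

// BuildTarget describes one submission: what the code is written in and
// where it should run. All values here are illustrative.
type BuildTarget struct {
	Language  string // e.g. "go1.x", "python3.8"
	Framework string // e.g. "gin", "django"
	Cloud     string // e.g. "aws-lambda", "gcp-cloudrun"
}

// RecipeFor picks a build recipe for a target: each combination maps to the
// steps that produce a deployable artifact for that vendor.
func RecipeFor(t BuildTarget) (string, error) {
	recipes := map[BuildTarget]string{
		{"go1.x", "gin", "aws-lambda"}:        "go-gin-lambda-zip",
		{"go1.x", "gin", "gcp-cloudrun"}:      "go-gin-cloudrun-image",
		{"python3.8", "django", "aws-lambda"}: "py-django-lambda-zip",
	}
	r, ok := recipes[t]
	if !ok {
		return "", fmt.Errorf("no recipe for %+v", t)
	}
	return r, nil
}

func main() {
	r, _ := RecipeFor(BuildTarget{"go1.x", "gin", "aws-lambda"})
	fmt.Println(r) // go-gin-lambda-zip
}
```

The same submission can fan out to several targets at once, which is what lets one codebase produce artifacts for multiple clouds in a single pipeline run.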
We then use those artifacts to run our deployment process. During deployment, our platform determines how your function application will be distributed globally based on your policies.
This gives the application or service creator the flexibility to decide what is important for their application. Some users need deployment policies that run their compute as close as possible to their end-users, while others need to utilize as many public cloud resources as possible.
For execution, your request is routed to our closest managed region (we currently manage three major regions: US West, Southeast Asia, and Central Europe) and executed based on the policy you set globally on your account or at the application level.
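As a purely illustrative sketch (the field names below are invented for this post, not Lyrid’s actual schema), an account- or application-level placement policy might express ideas like these:

```json
{
  "policy": "closest-region",
  "regions": ["us-west", "asia-southeast", "europe-central"],
  "scaleToZero": true,
  "maxInstancesPerRegion": 20
}
```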
Read more about it here in our Universal HTTP API Gateway Documentation.
What is next for us?
The mark of a good and successful platform company is its ability to let users experience the power of the platform and to repeat that success consistently.
And with that in mind, we are building an ecosystem around the platform: an ecosystem of reusable components built with our serverless-first mindset.
We are also looking to expand our pricing and build more cost predictability and consistency, using analytics data, for all tiers of users looking to burst up and down in a more efficient way.
Here are some topics I will be covering, along with examples of components we have successfully integrated into or use inside our platform, plus some projects from our early adopters:
- We will dive into more details about Lyrid and its components that we built to create this platform.
- We will walk you through the experience of building a full application, using our authentication service endpoint (https://id.lyrid.io) as an example. This service was previously hosted on Auth0, but prior to release we decided to host it ourselves using our own technology. We forked the Django GraphQL Auth project, put our serverless spin on it, and built the component. We will show you the steps of how we successfully migrated from Auth0.
- Learn how we package our Docusaurus documentation static files and serve them as a serverless web server function using Golang 1.x and Gin.
- How we integrated our build and deployment process with GitLab CI/CD and our tools to continuously update our services.
- Our distributed analytics and metrics collection for all our infrastructure monitoring is based on our own serverless functions, built with Golang 1.x, that extend Prometheus’s metric endpoints and service discovery. This is the backend behind this graph in our dashboard:
- And many more topics…
We will share these processes in future blogs and publish more guides on how to build different things, using a mixture of open-source material hosted on our GitHub page at https://github.com/LyridInc, blog posts, and video content. It will take us some time to create this content, so if you are interested in one of the topics above sooner, just contact us at email@example.com!
As for what we are looking for:
- Feedback: So far, all the features we have built have come from our users’ feedback. This helps us prioritize what we work on in order to build a platform that everyone can benefit from. We want to support more languages and more native serverless platforms. Join our Slack channel to engage with us, tell us what you want to see in the platform, and tell us how you want us to build it.
- Community Supporters and Contributors: internal contributors to build on the platform, or external contributors to build for the ecosystem (or both). We are not quite open-source end-to-end, but we are not shy about sharing our technology with the world.
- Strategic Partnerships and/or Investors: We are looking to grow and we will need help doing it. I am one of those people that values your time and effort more than anything else in the world.
We want to be more available all across the world and expand our network and ecosystem to include as much support for every cloud vendor, web framework, and programming language!
With that said, you can register for a free Lyrid account to get started at the following link:
Lastly, please don’t be shy: contact us at firstname.lastname@example.org or join our Slack channel to say hi. Let us know what you want to build with our platform!