How I accidentally built a serverless application – IBM Developer


As a developer advocate, one of the largest challenges I face is how to teach people to use our company's products. To do that well, you need to create workshops and disposable environments so your students can get their hands on the actual technology. As an IBM employee, I use the IBM Cloud, but it's designed for long-term production usage, not the ephemeral infrastructure that a workshop requires.

We often create ways to work around the limitations. Recently, while updating the deployment process of one such system, I realized I had created a full serverless stack, completely by accident. This blog post details how I accidentally built an automated serverless application and introduces you to the technology I used.

Enabling automation with Schematics

Before describing the serverless application, I'm going to pivot and talk about a feature of IBM Cloud that most people don't know about. It's called IBM Cloud Schematics, and it's a gem of our cloud. Here's an overview of the tool:

Automate your IBM Cloud infrastructure, service, and application stack across cloud environments. Oversee all of the resulting jobs in a single space.

And it's true! Basically, it's a wrapper around Terraform and Ansible, so you can store your infrastructure state in IBM Cloud and put real RBAC in front of it. You can leverage the cloud's Identity and Access Management (IAM) system and built-in permissions. This removes the tedium of dealing with Terraform state files and lets infrastructure teams focus only on the declaration code.

Why I built this serverless application

This brings me to using this tool on our cloud. For workshops and demos, I was told that I had to move away from "classic" clusters and move to virtual private clouds (VPCs). There's a bunch of Terraform code floating around, so I found some and edited it into a VPC, connected it to shared object storage, and added all of the clusters needed for a workshop into that same VPC. The result is that every workshop is now its own VPC, giving participants their own IP space and a walled garden of resources. This is a huge win for us.

Here's a look at how the application interacts with Schematics to create these VPCs:

[Image: overall flow of the request process]

The request process

  1. Someone opens a GitHub Enterprise issue on a specific repository.
  2. The GitHub issue validator receives a webhook from GitHub Enterprise and parses the issue for the different options. It also checks that no option exceeds its allowed limits and that the issue is formatted correctly. If everything is accepted, the validator tags the issue with scheduled to signal that it's ready to be created.
  3. The cron-issue-tracker polls the issues with the "scheduled" tag every 15 minutes.
  4. If it's within 24 hours of the start time, the API calls the grant-cluster-api and requests creation of the grant-cluster application.
  5. It calls either the classic or VPC Code Engine APIs to spin up the required clusters via the /create API endpoint.
  6. If it's a classic request, it calls the AWX backend. If it's a VPC request, it calls the Schematics backend to request the clusters.
  7. When the cron-issue-tracker sees that it's 24 hours after the "end time," it removes the grant-cluster application and destroys the clusters via the /delete API endpoint.
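The polling decision in steps 3 through 7 can be sketched as a small pure function. Everything here is hypothetical (the field names and return values are mine, not from the real codebase), but it captures the 24-hour windows and the classic/VPC routing described above:

```python
from datetime import datetime, timedelta

def next_action(issue: dict, now: datetime) -> str:
    """Decide what a cron pass should do with an issue.

    `issue` is assumed to carry a 'tags' list, parsed 'start'/'end'
    datetimes, and an 'infra' field of 'classic' or 'vpc'.
    """
    if "scheduled" not in issue["tags"]:
        return "skip"
    if now >= issue["end"] + timedelta(hours=24):
        # 24 hours past the end time: tear everything down via /delete.
        return "delete"
    if now >= issue["start"] - timedelta(hours=24):
        # Within 24 hours of the start time: create the clusters via
        # /create, routed to AWX (classic) or Schematics (VPC).
        return "create-awx" if issue["infra"] == "classic" else "create-schematics"
    return "wait"
```

Running this every 15 minutes against every tagged issue is all the orchestration the stack needs.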

Application description


I used the vpc-gen2-openshift-request-api, a Flask API that runs a Code Engine job, as the starting point of the serverless application. I discovered that, after handing a bunch of Terraform code to Schematics, the next natural step was to figure out a way to trigger the request via an API. This is where the IBM Code Engine platform comes into play.

If you view the GitHub repo above, you'll see that our Schematics request is wrapped as a Code Engine job (line 21 in ). Because of that, all I had to do was curl a JSON data string to our /create endpoint and it kicked the job off. Now I had the ability to run something like:

curl -X POST https://code_engine_url/create -H 'Content-Type: application/json' -d '{"APIKEY": "BLAH", "WORKSPACE": "BLAH2", "GHEKEY": "FakeKEY", "COUNTNUMBER": 10}'

This let us figure out how to get requests shipped to the API.
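The same call is easy to script. Here is a sketch using only the Python standard library, with the same fake placeholder values as the curl example above:

```python
import json
import urllib.request

# Same fake placeholder values as the curl example above.
payload = {"APIKEY": "BLAH", "WORKSPACE": "BLAH2", "GHEKEY": "FakeKEY", "COUNTNUMBER": 10}

req = urllib.request.Request(
    "https://code_engine_url/create",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would fire the request; omitted here because
# the endpoint URL is a placeholder.
print(req.get_method(), req.get_full_url())
```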


The second core part of this project was validating the GitHub Enterprise issue. With the help of Steve Martinelli, I took an IBM Cloud Functions application he created to parse a standard GitHub issue and pull options out of it.

For instance, the request gives you these options to fill out:

• email:
• event short name: openshift-workshop
• start time: 2021-10-02 15:00
• end time: 2021-10-02 18:00
• clusters: 25
• cluster type: OpenShift
• workers: 3
• worker type: b3c.4x16
• region: us-south

This Cloud Function fires on a webhook from GitHub Enterprise on any creation or edit of the issue and checks it against some parameters I set. For instance, there have to be fewer than 75 clusters, and the start and end times have to be formatted in a specific way and be within 72 hours of each other. If a request doesn't match my parameters, the application comments on the issue and asks the submitter to update it.

If everything parses correctly, the validator adds the scheduled tag to the issue so our next application can take ownership of it.
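That validation logic can be sketched in a few lines. The field names follow the issue template above, and the 75-cluster and 72-hour limits are the ones just described, but the helper functions themselves are hypothetical, not the actual Cloud Functions code:

```python
from datetime import datetime, timedelta

TIME_FORMAT = "%Y-%m-%d %H:%M"

def parse_fields(body: str) -> dict:
    """Turn '• key: value' lines from the issue template into a dict."""
    fields = {}
    for line in body.splitlines():
        line = line.strip().lstrip("•").strip()
        if ":" in line:
            key, value = line.split(":", 1)  # split on the first colon only
            fields[key.strip()] = value.strip()
    return fields

def validate(fields: dict) -> list:
    """Return the list of problems; an empty list means 'tag it scheduled'."""
    problems = []
    if int(fields.get("clusters", "0")) >= 75:
        problems.append("fewer than 75 clusters, please")
    try:
        start = datetime.strptime(fields["start time"], TIME_FORMAT)
        end = datetime.strptime(fields["end time"], TIME_FORMAT)
        if end - start > timedelta(hours=72):
            problems.append("start and end must be within 72 hours of each other")
    except (KeyError, ValueError):
        problems.append("times must look like 2021-10-02 15:00")
    return problems
```

Any non-empty result would become a bot comment asking the submitter to fix the issue.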


As I created this microservice, I realized I had a full serverless application brewing. After some deeper research into Code Engine, I discovered that there is a cron system built into the technology. So, now that I could parse the issues with webhooks, I could take that same framework and create a cron job that checks the start and end times and does something for us. This freed me from having to schedule a time for one of us to spin up the required systems. Using the cURL call to our vpc-gen2-request-api gave me my clusters at a reasonable time.

I also needed a system to hand out the clusters, and that's where the final microservice came into play.


The grant-cluster-api microservice completed my application puzzle. This microservice is a Code Engine job that automatically spins up a serverless application, with all of the required settings parsed from the GitHub issue, 24 hours before the start time, and removes it 24 hours after the end time. It also changes the tags and labels on the issue so the cron-issue-tracker knows what to do when it walks through the repository.
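The lifetime of the grant-cluster application reduces to a single window computation. This is a hypothetical helper of my own, not code from the repo, but it states the 24-hour rule precisely:

```python
from datetime import datetime, timedelta

def grant_window(start: datetime, end: datetime):
    """The grant-cluster app lives from 24 hours before the workshop's
    start time until 24 hours after its end time."""
    return start - timedelta(hours=24), end + timedelta(hours=24)

def app_should_exist(start: datetime, end: datetime, now: datetime) -> bool:
    """True when the cron pass should ensure the app is deployed."""
    up, down = grant_window(start, end)
    return up <= now <= down
```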


As you can see from the diagram, this application consists of a bunch of small APIs and functions that together do the work of a full application. Users have one and only one interface into the stack: the GitHub issue. When everything is set up correctly, the bots do the work for us. I have components that I can extend in the future, but everything is based on that first Flask application and the realization that all you have to do is send a JSON blob of data to request exactly what you need.

