
Exploring Serverless Portability

04.30.2019

With Functions as a Service (FaaS), cloud providers promise us the ability to deploy, run, and scale small pieces of application code without worrying about the underlying hardware or virtual machines. This comes with certain advantages, such as a pricing model that only charges for the time the code is actually running, quick deployments, and less time spent managing infrastructure. There are also some downsides: part of your infrastructure becomes a black box run by your cloud provider, and there is a risk of vendor lock-in, since serverless offerings differ between vendors.

There are frameworks out there, such as the Serverless Framework, that promise portability between cloud providers. However, you still need to understand the differences between FaaS offerings, since they differ in how they handle requests coming through an API gateway and in how events are consumed. Let’s dive in and take a look at how we might run a function in different environments.
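As a taste of that approach, a minimal Serverless Framework configuration for a Go function on AWS might look like the sketch below (the service name and handler path are hypothetical):

```yaml
service: greeting

# Swapping provider.name is not the whole story: handler and event
# definitions still differ between providers.
provider:
  name: aws
  runtime: go1.x

functions:
  greeting:
    handler: bin/greeting
    events:
      - http:
          path: greeting
          method: get
```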

Sample Function

We’ll start off with an implementation of one of the most important algorithms in computer science: Hello World. The implementation below, in Go, is a Command Line Interface (CLI) version. It takes a single command line parameter and outputs a greeting along with the current time.

package main

import (
    "fmt"
    "os"
    "time"
)

func main() {
    name := "world"
    if len(os.Args) >= 2 {
        name = os.Args[1]
    }
    greet(name)
}

func greet(name string) string {
    greeting := fmt.Sprintf("Hello %s, how are you doing at %s?", name, time.Now().Format("3:04PM"))
    fmt.Println(greeting)

    return greeting
}

The output of running this:

$ go run local/blog Kevin
Hello Kevin, how are you doing at 9:13AM?

The greet() function contains our important business logic; in a larger program it would live in a separate package imported by this wrapper. Now that the function is working, let’s send it to production.
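That split might look like the sketch below. The exported Greet function and single-file layout are illustrative; it’s shown as package main here so it runs standalone, but in a real project it would be its own package with no cloud-provider imports.

```go
// Provider-agnostic business logic. Each wrapper (CLI, Lambda,
// Cloud Functions, Cloud Run) would import this package and decide
// how to deliver the result.
package main

import (
	"fmt"
	"time"
)

// Greet builds the greeting; callers decide how to deliver it
// (stdout, an API Gateway response, an http.ResponseWriter, ...).
func Greet(name string) string {
	return fmt.Sprintf("Hello %s, how are you doing at %s?", name, time.Now().Format("3:04PM"))
}

func main() {
	fmt.Println(Greet("Kevin"))
}
```

With this layout, each of the provider-specific wrappers below shrinks to event parsing plus a call to Greet.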

AWS Lambda

Lambda is the most mature of the FaaS offerings. It was first on the scene, becoming generally available in 2015. Since then, AWS has released the Serverless Application Model (SAM), which will read in a template, package your code, push it to S3, and output a CloudFormation template with all of the resources you need. Let’s update our function to work with Lambda and SAM:

package main

import (
    "fmt"
    "net/http"
    "time"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
)

func HandleRequest(req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    name, ok := req.QueryStringParameters["name"]
    parameter := "World"

    if ok {
        parameter = name
    }

    greeting := greet(parameter)
    return events.APIGatewayProxyResponse{
        StatusCode: http.StatusOK,
        Body:       greeting,
    }, nil
}

func main() {
    lambda.Start(HandleRequest)
}

func greet(name string) string {
    greeting := fmt.Sprintf("Hello %s, how are you doing at %s?", name, time.Now().Format("3:04PM"))
    fmt.Println(greeting)

    return greeting
}

In this function we added two AWS-specific imports to help turn this into a Lambda function. The “events” import makes it easier to handle incoming events from API Gateway and parse the request parameters. Here is the SAM template:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  GreetingFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: go1.x
      Handler: blog-lambda
      Timeout: 10
      Events:
        RootHandler:
          Type: Api
          Properties:
            Path: '/greeting'
            Method: get

From this template we’ll get a CloudFormation template that will create a stack with the Lambda function and an API Gateway. Here’s how we deploy:

$ GOARCH=amd64 GOOS=linux go build local/blog-lambda
$ sam package --template-file lambda.yaml --s3-bucket <redacted> --output-template-file lambda-packaged.yaml
Uploading to acc15f238dfd2ba7d5c1f06615a4f92f  9100692 / 9100692.0  (100.00%)
Successfully packaged artifacts and wrote output template to file lambda-packaged.yaml.
Execute the following command to deploy the packaged template
aws cloudformation deploy --template-file <redacted>/lambda-packaged.yaml --stack-name <YOUR STACK NAME>
$ aws cloudformation deploy --template-file <redacted>/lambda-packaged.yaml --stack-name blog-lambda --capabilities CAPABILITY_IAM

Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - blog-lambda
$ curl https://<redacted>.us-east-1.amazonaws.com/Prod/greeting?name=Kevin
Hello Kevin, how are you doing at 1:57PM?

Google Cloud Functions

A relative newcomer on the FaaS scene is Google Cloud Functions, which just became generally available in 2018. Here’s how we can modify our original code to work with Cloud Functions:

package greeting

import (
    "fmt"
    "log"
    "net/http"
    "time"
)

func Greeting(writer http.ResponseWriter, request *http.Request) {
    name, ok := request.URL.Query()["name"]
    parameter := "World"

    if ok && len(name[0]) >= 1 {
        parameter = name[0]
    }

    greeting := greet(parameter)
    log.Println(request.URL.Query())
    fmt.Fprint(writer, greeting) // Fprint, not Fprintf: the greeting is data, not a format string
}

func greet(name string) string {
    greeting := fmt.Sprintf("Hello %s, how are you doing at %s?", name, time.Now().Format("3:04PM"))
    fmt.Println(greeting)

    return greeting
}

This uses the standard net/http package and exports the Greeting() function to handle any HTTP requests that come in. Here’s how we can publish it:

$ gcloud functions deploy Greeting --runtime go111 --trigger-http
Deploying function (may take a while - up to 2 minutes)...done.
availableMemoryMb: 256
entryPoint: Greeting
httpsTrigger:
  url: https://<redacted>.cloudfunctions.net/Greeting
labels:
  deployment-tool: cli-gcloud
name: projects/<redacted>/locations/us-central1/functions/Greeting
runtime: go111
serviceAccountEmail: <redacted>@appspot.gserviceaccount.com
sourceUploadUrl: https://storage.googleapis.com/<redacted>
status: ACTIVE
timeout: 60s
updateTime: '2019-04-19T14:51:58Z'
versionId: '2'
$ curl https://<redacted>.cloudfunctions.net/Greeting?name=Kevin
Hello Kevin, how are you doing at 2:53PM?

Notice that Cloud Functions takes our code and compiles it for us, and that passing the --trigger-http flag creates the HTTPS endpoint automatically.

Google Cloud Run

Google’s latest serverless compute offering is Google Cloud Run, which was announced in April 2019 and is still in beta. While not strictly FaaS (it runs anything you can put in a Docker container), it has some of the features of Lambda and Cloud Functions, such as built-in event handling. The interesting thing about Cloud Run is that it’s based on an open standard called Knative, which runs on top of Kubernetes. The code needs to be packaged inside a Docker container; Knative then handles autoscaling (down to 0 instances if the service hasn’t been called recently) and passes in events and HTTP requests. Here’s how our code looks for Cloud Run:

package main

import (
    "fmt"
    "log"
    "net/http"
    "os"
    "time"
)

func main() {
    log.Print("Starting...")

    http.HandleFunc("/", handler)

    port := os.Getenv("PORT")
    if port == "" {
        port = "8080"
    }

    log.Fatal(http.ListenAndServe(fmt.Sprintf(":%s", port), nil))
}

func handler(writer http.ResponseWriter, request *http.Request) {
    name, ok := request.URL.Query()["name"]
    parameter := "World"

    if ok && len(name[0]) >= 1 {
        parameter = name[0]
    }

    greeting := greet(parameter)
    fmt.Fprint(writer, greeting) // Fprint, not Fprintf: the greeting is data, not a format string
}

func greet(name string) string {
    greeting := fmt.Sprintf("Hello %s, how are you doing at %s?", name, time.Now().Format("3:04PM"))
    fmt.Println(greeting)

    return greeting
}

This code will run locally as well and listen on port 8080. Here’s the Dockerfile:

FROM golang:1.12 as builder

WORKDIR /go/src/local/blog-run
COPY . .

RUN CGO_ENABLED=0 GOOS=linux go build -v -o blog-run

FROM alpine
COPY --from=builder /go/src/local/blog-run/blog-run /blog-run

EXPOSE 8080

CMD ["/blog-run"]

And then build and deploy it:

$ gcloud builds submit --tag gcr.io/<redacted>/greeting
…(Docker build output)...
$ gcloud beta run deploy greeting --image gcr.io/<redacted>/greeting --allow-unauthenticated
Deploying container to Cloud Run service [greeting] in project [<redacted>] region [us-central1]
✓ Deploying... Done.
  ✓ Creating Revision...
  ✓ Routing traffic...
  ✓ Setting IAM Policy...
Done.
Service [greeting] revision [greeting-00002] has been deployed and is serving traffic at https://<redacted>.a.run.app
$ curl https://<redacted>.a.run.app?name=Kevin
Hello Kevin, how are you doing at 3:23PM?

As with Cloud Functions, we get an HTTPS URL where we can hit the service without setting up any additional infrastructure. Cloud Run will also scale the number of containers and balance traffic between them.

Knative

For those who like more servers with their serverless, the Knative project provides a set of components, running on top of Kubernetes, that handle the deployments, autoscaling, and event management of a serverless framework. There are a few reasons why this could be useful:

  • Organizations migrating from a data center to the cloud can use the same technologies in both places.
  • The same services can run unchanged on different cloud providers.
  • Some organizations have security policies that require them to manage their own servers.
  • Some teams need tighter control over the network where the service is running.
  • As the standard is adopted by more cloud providers, serverless platforms will start to converge.

Here’s how we can create and run our own Kubernetes cluster on Google Cloud with Knative. The cluster doesn’t have to be on Google Cloud; we could run it on any cloud provider, or on premises, as long as Knative and Istio are installed.

$ gcloud beta container clusters create test-cluster-1 \
>   --addons=HorizontalPodAutoscaling,HttpLoadBalancing,Istio,CloudRun \
>   --machine-type=n1-standard-4 \
>   --cluster-version=1.11.8-gke.6 --zone=us-central1-a \
>   --enable-stackdriver-kubernetes --enable-ip-alias \
>   --scopes cloud-platform

Wait for the cluster to come up…

$ kubectl get pods --namespace=knative-serving
NAME                                   READY     STATUS    RESTARTS   AGE
activator-6f7d494f55-fsnhz             1/1       Running   0          3m
autoscaler-5cb4d56d69-lppxt            1/1       Running   0          3m
cloudrun-controller-6975bb5c8c-bp74t   1/1       Running   0          3m
controller-6d65444c78-975ld            1/1       Running   0          3m
webhook-55f88654fb-sz9pt               1/1       Running   0          3m

And now we can see Knative running on the cluster. Google Cloud Run will hook directly into that Kubernetes cluster, and we can deploy our function like this:

$ gcloud beta run deploy greeting --image gcr.io/<redacted>/greeting --region=us-central1 --cluster=test-cluster-1
Deploying container to Cloud Run on GKE service [greeting] in namespace [default] of cluster [test-cluster-1]
✓ Deploying new service... Done.
  ✓ Creating Revision...
  - Routing traffic...
Done.
Service [greeting] revision [greeting-98hbw] has been deployed and is serving traffic at greeting.default.example.com
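Under the hood, that gcloud command creates a Knative Service object in the cluster. The same thing could be applied directly with kubectl on any Knative-enabled cluster; a sketch of the manifest, using the v1alpha1 API current as of this writing (the image path is hypothetical):

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: greeting
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/<project-id>/greeting
```

Applying a manifest like this produces the same revisions and routes regardless of where the cluster runs, which is what makes the service portable across providers.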

Istio is a service mesh framework that runs on top of Kubernetes and handles things such as monitoring, service authentication, and load balancing. To call our Knative function, we need to find the public IP of the Istio load balancer and then call the function:

$ kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                                                                                                                   AGE
istio-ingressgateway   LoadBalancer   10.106.14.83   <redacted>   80:31213/TCP,443:32116/TCP,31400:30546/TCP,15011:30336/TCP,8060:31376/TCP,853:32759/TCP,15030:31909/TCP,15031:32151/TCP   20m
$ curl -H "Host: greeting.default.example.com" http://<redacted>?name=Kevin
Hello Kevin, how are you doing at 4:29PM?

The first request takes about 30 seconds as the container starts; this is Knative scaling from 0 to 1 instance. We can now see it running in our cluster:

$ kubectl get pods
NAME                                        READY     STATUS    RESTARTS   AGE
greeting-98hbw-deployment-6658876db-l5tqh   3/3       Running   0          2m

Wait a few minutes, and Knative will spin down to 0 instances:

$ kubectl get pods
No resources found.

Summary

There are several options for running code without the need to manage servers. These solutions are vendor specific, but that doesn’t mean you’ll have complete vendor lock-in. Well-organized code can be made portable with a few changes, or by leveraging a framework that abstracts away those details. Knative offers an open source solution for running serverless applications on top of Kubernetes, and Google is throwing its weight behind the project by building Google Cloud Run on top of it. While still in beta, Knative could prove to be the standard that serverless computing needs.