Building a Serverless Application with JavaScript: A Case Study

A practical case study showing how to build, test, deploy, and operate a JavaScript-based serverless application using AWS Lambda and Azure Functions. Covers architecture, tooling, CI/CD, costs, monitoring, and lessons learned.

Introduction

Serverless architectures let developers focus on application code instead of infrastructure. In this case study we’ll build a small but realistic JavaScript serverless application, examine choices between AWS Lambda and Azure Functions, show key implementation patterns, and surface the advantages and trade-offs encountered during development.

The goal: implement an API that ingests user-submitted JSON payloads, validates and enriches them, stores results in a database, and pushes notifications to subscribers. We’ll implement this in Node.js, deploy it to AWS Lambda as the primary target, and show how the same approach maps to Azure Functions.

Project overview

Requirements:

  • HTTP endpoints to accept JSON payloads (REST API)
  • Input validation and enrichment (e.g., geo-IP lookup)
  • Persist to managed datastore (DynamoDB / Cosmos DB)
  • Publish events (SNS / Event Grid)
  • Observability (logs, traces, metrics)
  • Automated testing and CI/CD

Non-functional requirements:

  • Low operational overhead
  • Auto-scaling to handle bursts
  • Predictable per-request latency

Architecture decisions

High level architecture:

  • API gateway (AWS API Gateway / Azure API Management) fronting serverless functions
  • Lambda / Function App running Node.js handlers
  • DynamoDB (AWS) / Cosmos DB (Azure) for persistence
  • SNS / Event Grid for pub/sub notifications
  • CloudWatch + X-Ray (AWS) / Application Insights (Azure) for observability

Why serverless?

  • No VM provisioning or autoscaling rules to manage
  • Pay-per-use billing fits bursty workloads
  • Fast iteration loop for JavaScript developers

Trade-offs:

  • Cold starts can increase latency for infrequent functions
  • Vendor lock-in around managed services and APIs
  • Debugging distributed systems is more complex than monoliths

Choosing a provider and framework

For this case study we chose AWS as the primary cloud, and implemented a small port to Azure to demonstrate portability. We used the Serverless Framework for deployment orchestration because it supports multiple providers and simplifies resource definitions. Alternatives: AWS SAM, Terraform, or Azure Resource Manager (ARM) / Bicep for Azure.

Implementing the Lambda (Node.js)

Project structure (simplified):

serverless-app/
  ├─ handler.js
  ├─ serverless.yml
  ├─ package.json
  ├─ tests/
  └─ lib/

Example handler (handler.js):

// handler.js
const AWS = require('aws-sdk');
const uuid = require('uuid');
const dynamo = new AWS.DynamoDB.DocumentClient();

exports.submit = async event => {
  // Parse defensively: malformed JSON should yield a 400, not an unhandled error
  let body;
  try {
    body = JSON.parse(event.body || '{}');
  } catch {
    return { statusCode: 400, body: JSON.stringify({ error: 'Malformed JSON' }) };
  }

  // Basic validation
  if (!body.userId || !body.payload) {
    return {
      statusCode: 400,
      body: JSON.stringify({ error: 'Invalid request' }),
    };
  }

  // Enrichment (example: add server timestamp)
  const item = {
    id: uuid.v4(),
    userId: body.userId,
    payload: body.payload,
    createdAt: new Date().toISOString(),
  };

  await dynamo.put({ TableName: process.env.TABLE_NAME, Item: item }).promise();

  // Publish an event (simplified)
  // const sns = new AWS.SNS();
  // await sns.publish({ TopicArn: process.env.TOPIC_ARN, Message: JSON.stringify(item) }).promise();

  return { statusCode: 201, body: JSON.stringify(item) };
};

serverless.yml (snippet):

service: serverless-app
provider:
  name: aws
  runtime: nodejs18.x
  environment:
    TABLE_NAME: ${self:service}-table
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - dynamodb:PutItem
          Resource:
            - Fn::GetAtt: [Table, Arn]

functions:
  submit:
    handler: handler.submit
    events:
      - http:
          path: submit
          method: post

resources:
  Resources:
    Table:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:provider.environment.TABLE_NAME}
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST

Notes:

  • We used environment variables injected by the framework for table names and ARNs.
  • The example uses AWS SDK v2 for brevity; for new projects prefer v3, whose modular imports can significantly cut bundle size (see the sketch below).
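
The same put maps onto SDK v3's modular clients. A minimal sketch (not what the deployed example used):

// handler.js with AWS SDK v3 modular imports — a sketch
const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');
const { DynamoDBDocumentClient, PutCommand } = require('@aws-sdk/lib-dynamodb');

const client = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Inside the handler, the v2 dynamo.put(...).promise() call becomes:
// await client.send(new PutCommand({ TableName: process.env.TABLE_NAME, Item: item }));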

Mapping to Azure Functions

Azure Functions handler (JavaScript) looks similar but uses different bindings. Example function.json + index.js:

function.json:

{
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["post"]
    },
    { "type": "http", "direction": "out", "name": "res" }
  ]
}

index.js:

module.exports = async function (context, req) {
  const body = req.body || {};
  if (!body.userId || !body.payload) {
    context.res = { status: 400, body: { error: 'Invalid request' } };
    return;
  }
  // Persist to Cosmos DB using the SDK
  context.res = {
    status: 201,
    body: {
      /*...*/
    },
  };
};
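
The persistence step elided above would use the @azure/cosmos SDK. A minimal sketch, assuming a COSMOS_CONNECTION connection string and example database/container names:

// Persisting to Cosmos DB — a sketch; names and env vars are illustrative
const { CosmosClient } = require('@azure/cosmos');

const client = new CosmosClient(process.env.COSMOS_CONNECTION);
const container = client.database('serverless-app').container('submissions');

// Inside the function body, after validation:
// await container.items.create(item);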

Azure Functions can be deployed with the Azure Functions Core Tools or with CI/CD pipelines via GitHub Actions or Azure DevOps. See https://learn.microsoft.com/azure/azure-functions/functions-develop-local for local development.

Local development and testing

Techniques we used:

  • Local emulators: DynamoDB Local for AWS, the Cosmos DB Emulator for Azure (see the sketch after this list)
  • Unit tests with Jest and mocked SDK clients
  • Integration tests against a local stack or ephemeral cloud resources
  • Running functions locally with the serverless-offline plugin (Serverless Framework) or func start (Azure Functions Core Tools)
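
To make the emulator transparent to handler code, the client can switch endpoints when running offline. A minimal sketch (IS_OFFLINE is set by the serverless-offline plugin):

// lib/dynamo.js — use DynamoDB Local when running under serverless-offline
const AWS = require('aws-sdk');

const dynamo = new AWS.DynamoDB.DocumentClient(
  process.env.IS_OFFLINE
    ? { region: 'localhost', endpoint: 'http://localhost:8000' }
    : {}
);

module.exports = { dynamo };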

Example jest test (handler.test.js):

const { submit } = require('../handler');

test('returns 400 on invalid request', async () => {
  const response = await submit({ body: '{}' });
  expect(response.statusCode).toBe(400);
});
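
To cover the happy path without touching the cloud, the DynamoDB client can be mocked. A sketch with Jest (the mock mirrors only the calls submit() makes):

// tests/handler.mock.test.js
jest.mock('aws-sdk', () => {
  const put = jest.fn(() => ({ promise: () => Promise.resolve({}) }));
  return { DynamoDB: { DocumentClient: jest.fn(() => ({ put })) } };
});

const { submit } = require('../handler');

test('returns 201 for a valid request', async () => {
  const response = await submit({
    body: JSON.stringify({ userId: 'u1', payload: { hello: 'world' } }),
  });
  expect(response.statusCode).toBe(201);
});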

Best practices:

  • Keep logic small per function for easy testing
  • Extract business logic into pure functions in lib/ so tests don’t need cloud clients (sketch below)
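
A minimal sketch of what that extraction looks like (the validation rules shown are just this example's):

// lib/validate.js — pure logic, testable with no cloud clients or mocks
function validateSubmission(body) {
  if (!body || typeof body.userId !== 'string' || body.payload === undefined) {
    return { valid: false, error: 'Invalid request' };
  }
  return { valid: true };
}

module.exports = { validateSubmission };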

CI/CD and deployment

We used GitHub Actions for CI/CD with these steps:

  1. Run unit tests and linting
  2. Build (bundle dependencies)
  3. Deploy to a staging environment using Serverless Framework
  4. Run smoke tests against staging
  5. Promote to production with manual approval

Example deploy step (GitHub Actions):

- name: Deploy to AWS
  run: npx serverless deploy --stage ${{ github.ref == 'refs/heads/main' && 'prod' || 'staging' }}
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
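
Step 4's smoke tests can be a plain Node script the workflow runs after deploy. A minimal sketch, assuming the deploy step exports the staging endpoint as STAGING_URL:

// scripts/smoke-test.js — posts one test payload and asserts a 201 (hypothetical)
const assert = require('node:assert');

async function main() {
  const res = await fetch(`${process.env.STAGING_URL}/submit`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ userId: 'smoke-test', payload: { ping: true } }),
  });
  assert.strictEqual(res.status, 201, `expected 201, got ${res.status}`);
  console.log('Smoke test passed');
}

main().catch(err => {
  console.error(err);
  process.exit(1);
});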

For Azure we used the Azure CLI login action and func azure functionapp publish for deployment.

Observability: logs, traces, and metrics

Observability is critical in serverless systems:

  • Logs: CloudWatch Logs (AWS) / Application Insights logs (Azure)
  • Tracing: AWS X-Ray / OpenTelemetry / Application Insights distributed tracing
  • Metrics: built-in metrics for invocations, duration, and errors

We added structured logging (JSON) and included request IDs in logs so traces could be correlated. Use async hooks or frameworks that propagate context to correlate logs with traces.
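
The shape we logged looked roughly like this (the field names are our own convention):

// lib/log.js — minimal structured logger
function log(level, message, context = {}, extra = {}) {
  console.log(JSON.stringify({
    level,
    message,
    requestId: context.awsRequestId, // lets us join this line to a trace
    timestamp: new Date().toISOString(),
    ...extra,
  }));
}

module.exports = { log };

// In a handler with the (event, context) signature:
// log('info', 'received submission', context, { userId: body.userId });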

Performance and scaling considerations

  • Cold starts: mitigate by keeping deployment packages small, doing minimal work outside the handler, and using provisioned concurrency for critical endpoints.
  • Concurrency limits: AWS enforces account-level concurrency limits; request an increase if you expect sustained high concurrency.
  • Throttling: ensure downstream services (databases or external APIs) can cope; add retry/backoff patterns and circuit breakers (see the sketch after this list).
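
A generic retry helper with exponential backoff and full jitter, as a sketch:

// lib/retry.js — retries a failing async call with exponential backoff + jitter
async function withRetry(fn, { retries = 3, baseMs = 100 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err;
      const delayMs = Math.random() * baseMs * 2 ** attempt; // full jitter
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}

module.exports = { withRetry };

// Usage: await withRetry(() => dynamo.put(params).promise());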

Cost considerations

Serverless billing is attractive but can grow with heavy or long-running workloads.

  • Lambda bills per request plus compute duration; optimize duration with smaller packages and efficient code
  • Optimize memory allocation: pick the lowest setting that still yields acceptable latency; AWS charges in GB-seconds (memory × duration)
  • Use provisioned concurrency for stable high-traffic endpoints (it reduces cold starts but adds a fixed cost)

Monitor spend with AWS Cost Explorer (or CloudWatch billing alarms) and Azure Cost Management, and set alerts for spikes.

Security

Key points implemented:

  • Principle of least privilege: IAM roles scoped to the smallest set of resources
  • Environment variables stored securely (use Secrets Manager or Azure Key Vault for sensitive data)
  • Validate and sanitize inputs to prevent injection attacks (see the schema sketch below)
  • Use API Gateway authorizers or Function-level authentication (JWT/OAuth) for protected endpoints

AWS docs on least privilege: https://aws.amazon.com/iam/
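
For input validation beyond the hand-rolled checks above, a JSON schema keeps the rules declarative. A sketch using the ajv package (the schema shown is illustrative):

// lib/schema.js — compile the schema once, outside the handler
const Ajv = require('ajv');
const ajv = new Ajv();

const validate = ajv.compile({
  type: 'object',
  properties: {
    userId: { type: 'string', minLength: 1 },
    payload: { type: 'object' },
  },
  required: ['userId', 'payload'],
  additionalProperties: false,
});

module.exports = { validate };

// In the handler:
// if (!validate(body)) return { statusCode: 400, body: JSON.stringify({ error: 'Invalid request' }) };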

Challenges encountered and how we solved them

  1. Cold-start latency for low-traffic endpoints

    • Solution: provisioned concurrency for critical endpoints, plus smaller packages via tree-shaking and the modular AWS SDK.
  2. Local testing drift vs cloud behavior

    • Solution: combine local unit tests with ephemeral cloud-based integration tests run in CI.
  3. Debugging asynchronous workflows (events, retries)

    • Solution: structured logs and tracing (X-Ray/Application Insights), and add dead-letter queues for failed events.
  4. Managing IAM complexity

    • Solution: generate least-privilege roles in templates and use staged deployments (dev/staging/prod) to validate policies.

Lessons learned and best practices

  • Design for idempotency: functions may be retried, so handlers must tolerate duplicate invocations (see the sketch after this list).
  • Keep functions small and single-purpose for easier testing and faster cold starts.
  • Use feature flags and blue/green (or canary) deployments for risk mitigation.
  • Keep local dev loops fast: mock expensive services during unit tests; reserve integration testing for CI.
  • Invest early in observability: logs + traces + metrics save hours during incidents.
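
For the idempotency point, DynamoDB's conditional writes make retried deliveries safe. A sketch, assuming callers supply a stable id (e.g., a client-generated request id):

// lib/put-once.js — the write succeeds only the first time an id is seen
const AWS = require('aws-sdk');
const dynamo = new AWS.DynamoDB.DocumentClient();

async function putOnce(item) {
  try {
    await dynamo.put({
      TableName: process.env.TABLE_NAME,
      Item: item,
      ConditionExpression: 'attribute_not_exists(id)',
    }).promise();
  } catch (err) {
    if (err.code !== 'ConditionalCheckFailedException') throw err;
    // Already written by an earlier attempt; treat the retry as success.
  }
}

module.exports = { putOnce };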

Advantages recap

  • Faster time-to-market due to reduced ops overhead
  • Automatic scaling for bursty traffic
  • Cost-effective for workloads with variable usage

Disadvantages recap

  • Cold start concerns
  • Potential vendor lock-in and different APIs across clouds
  • Increased complexity in distributed-system debugging

Conclusion

Building a serverless application with JavaScript is highly productive for many use cases. In this case study we implemented a simple ingest-and-store pipeline with AWS Lambda (and showed how to map it to Azure Functions), and covered the patterns and trade-offs encountered. The most important non-technical takeaway: invest in observability, testing, and CI/CD - these reduce friction and make serverless systems reliable and maintainable.
