Local Orchestration and Docker Compose
Cloud Computing
MSc in Computer Engineering · Mobile Computing

Lab Worksheet 3

Date: 10/03/2026  ·  2025/26 2nd Semester

Go from a single container to a microservice ecosystem, composed of two services that communicate with each other.

Prerequisites: Visual Studio Code · Docker · Node.js
1

Upgrade the Application — Implement Caching

Before we create the Redis database service, we will upgrade the application by adding a caching mechanism.

  1. Open your terminal, create a new directory for today's lab, and navigate into it:

    bash
    mkdir lab03 && cd lab03
  2. Initialize a new Node project and install Express and Redis:

    bash
    npm init -y
    npm install express redis --save
  3. Create a file named server.js and paste the following code into it:

    javascript · server.js
    const express = require('express');
    const redis = require('redis');
    
    const app = express();
    const PORT = process.env.PORT || 3000;
    const REDIS_URL = process.env.REDIS_URL || 'redis://localhost:6379';
    
    const client = redis.createClient({ url: REDIS_URL });
    client.on('error', (err) => console.error('Redis Client Error', err));
    
    app.get('/api/data', async (req, res) => {
      try {
        if (!client.isOpen) await client.connect();
    
        const cachedData = await client.get('my_cache_key');
        if (cachedData) {
          return res.json({ source: 'Redis Cache (Fast)', data: JSON.parse(cachedData) });
        }
    
        // Simulate a slow database/API call (2 seconds)
        const freshData = { message: "Hello from the Cloud!", timestamp: Date.now() };
        await new Promise(resolve => setTimeout(resolve, 2000));
    
        await client.setEx('my_cache_key', 30, JSON.stringify(freshData));
    
        res.json({ source: 'Backend Computation (Slow)', data: freshData });
      } catch (error) {
        res.status(500).json({ error: error.message });
      }
    });
    
    app.listen(PORT, () => {
      console.log(`Server is running on port ${PORT}`);
    });
Note

We cannot test the application yet as the Redis database is not running.

2

The Multi-Stage Dockerfile

In this step, apply the concepts from the previous class to create a multi-stage Dockerfile that builds the API image.
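As a starting point, one possible shape for such a Dockerfile is sketched below. This is an assumption, not the required solution: the node:20-alpine base image and the stage name are illustrative, so adapt it to what you built in the previous class.

```dockerfile
# Stage 1 (build): install production dependencies only
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

# Stage 2 (runtime): copy only what the app needs to run
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/node_modules ./node_modules
COPY server.js .
EXPOSE 3000
CMD ["node", "server.js"]
```

The final image then carries only the runtime dependencies, leaving the npm cache and any build tooling behind in the first stage.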

3

Writing the Docker Compose File

Instead of starting each container with a separate docker run command every time, passing complex parameters for environment variables, volume mounts, and networking, we will introduce Docker Compose.

  1. First, create a file named docker-compose.yml.

  2. Then, add the following code:

    yaml · docker-compose.yml
    services:
      api-service:
        build: .
        ports:
          - "8080:3000"
        environment:
          - REDIS_URL=redis://cache-db:6379
        depends_on:
          - cache-db
    
      cache-db:
        image: redis:7-alpine
        ports:
          - "6379:6379"
        volumes:
          - redis-data:/data
    
    volumes:
      redis-data:
Question

How does the API find the Redis database on the network, as we did not input its IP address?

4

Orchestration and Testing

  1. Build and boot the entire application in detached mode:

    bash
    docker compose up -d --build
  2. Check the status of the microservices:

    bash
    docker compose ps
  3. Now test the caching mechanism by making a request to the endpoint http://localhost:8080/api/data:

    • On the first attempt, it should take 2 seconds — "source": "Backend Computation (Slow)"
    • On subsequent attempts (within 30 seconds), the response should be immediate — "source": "Redis Cache (Fast)"
5

Proving Data Persistency

In the docker-compose.yml file, we defined a volume for the Redis data (redis-data:/data). We will now verify that it works and that the cached data is not lost when the containers are destroyed.

  1. Simulate a crash by destroying the entire solution:

    bash
    docker compose down
  2. Bring the application up again:

    bash
    docker compose up -d
  3. Make a request to the API again (within the 30-second TTL set by setEx) and confirm that the response still comes from the cache.

  4. Clean up the environment, this time also deleting the persistent volume:

    bash
    docker compose down -v