Dockerize Angular development

Introduction to Docker

Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.

Docker for frontend

Nowadays, Docker is used across teams: frontend, backend, and DevOps. Let's dockerize an Angular app. Here we will use Docker to run the Angular application just as we run it on the local machine, with all the required libraries installed.

Using a Dockerfile

  1. Install Docker.

  2. Create a new Angular application: ng new ng-docker-demo

  3. The app can be run on the local machine with npm run start or ng serve.

  4. To dockerize the app, create a Dockerfile like the one below:

# Base image
FROM node:14-alpine
# Working directory path inside the container
WORKDIR /app
# Copy package.json separately so the npm install layer can be cached
COPY package.json /app
RUN npm install
# Copy the code from the current directory (host) to the working directory (container)
COPY . /app
# Document the port the dev server listens on inside the container
EXPOSE 4200
CMD ["npm", "start"]

Here we use the node:14-alpine image, based on Node 14.x, which is pulled from Docker Hub. There are many images available; pick one that fits your requirements. We then set WORKDIR to /app, which can be any path you like. Every command starting with RUN or CMD will use this folder as its default working directory.
The next step is to copy the source files (COPY) and install the dependencies. We copy package.json separately from the rest of the files. Why? Because Docker caches every step of the Dockerfile when building the image multiple times. When we don't modify anything and build the image again, nothing is rebuilt, as every step is cached. If we change a JavaScript file, Docker reruns the commands from COPY . /app onwards. When we modify package.json, Docker reruns the commands from COPY package.json /app onwards.
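If the project also has a package-lock.json (projects generated by ng new do), the caching step can be made stricter with npm ci, which installs exactly the locked versions. This is only a sketch of a variant, not the Dockerfile used above:

# Variant sketch: assumes package-lock.json is present in the project
FROM node:14-alpine
WORKDIR /app
# Copy both manifests so the install layer is only invalidated when dependencies change
COPY package.json package-lock.json /app/
# Install exactly what the lock file specifies
RUN npm ci
COPY . /app
EXPOSE 4200
CMD ["npm", "start"]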

Also, create a .dockerignore file like the one below; otherwise, the COPY step will copy the entire node_modules folder into the image as well, which ends up producing a very large image.

node_modules
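Only the node_modules entry is strictly needed here; as an assumption about a typical Angular CLI project layout, a slightly fuller ignore file might also exclude build output and version control metadata:

# Dependencies are installed inside the container
node_modules
# Default Angular CLI build output
dist
# Version control metadata
.git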

By default, an application running inside the container on a specific port is not reachable from the host machine. EXPOSE documents which port the container listens on; the port is actually published to the host with the -p flag when we run the container. Once the port is published, we can type the mapped URL into our browser (for example http://localhost:4200 when mapping 4200:4200) and see the result.

At the same time, make one small change to the npm scripts: set the start script to "start": "ng serve --host 0.0.0.0". By default ng serve only listens on localhost inside the container, which is not reachable from outside, so binding to 0.0.0.0 lets the dev server accept connections coming from the host.
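The scripts section of package.json then looks roughly like this (the other entries are the defaults generated by ng new and may differ in your project):

{
  "scripts": {
    "ng": "ng",
    "start": "ng serve --host 0.0.0.0",
    "build": "ng build",
    "test": "ng test"
  }
}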

  5. To run this image, we have to build it and then run the created container.
# Build the image: docker build -t <image-name> <path-to-build-context>
docker build -t client .
# Run the container: docker container run -p <host-port>:<container-port> <image-name>
docker container run -it --name my-docker-container --rm -p 4300:4200 client

# Use "--rm" so the container is removed automatically when we stop it
# Use "--name" to assign a name to the container
# Use "-it" for interactive mode, so the dev server output stays attached to the terminal
  6. With the above commands, the Angular app is served on port 4300 of the host machine: http://localhost:4300.
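To confirm the container is up and to watch the dev server output, the usual Docker commands work as expected (my-docker-container is just the name chosen above):

# List running containers and their port mappings
docker ps
# Follow the dev server logs of the container started above
docker logs -f my-docker-container
# Stop the container (it is removed automatically because of --rm)
docker stop my-docker-container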

There are some limitations of this approach:

  • Files generated inside the container are not visible from the host machine. This means we won't see the node_modules folder on our host machine, and because of this we lose code completion in the editor. We also can't commit the generated package-lock.json to source control, because it is not available on the host machine either.

  • We have to stop, rebuild, and rerun the container on every dependency or file change, which takes several commands each time. We also lose live-reload. Basically, most developer-friendly tooling won't work with this approach.

Using Docker Compose

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration.

Compose works in all environments: production, staging, development, testing, as well as CI workflows. It also has commands for managing the whole lifecycle of your application, such as starting, stopping, and rebuilding services, viewing their status, and streaming their log output.

Based on the setup from the Dockerfile above, we can write a docker-compose.yml file like the one below:

version: '3'
services:
  client:
    image: node:14-alpine
    working_dir: /app
    volumes:
      - ./:/app
    ports:
      - 4201:4200
    command: sh -c "npm install && npm start"

Every Dockerfile can become one service in docker-compose. Each service has a unique key (client here), which plays the same role as naming the image with docker build -t <image-name>.
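For example, instead of pulling the plain node image, Compose could build and run the Dockerfile from the first approach. This is only an alternative sketch; note that the bind mount hides the node_modules installed during the image build, so the command reinstalls the dependencies before starting:

version: '3'
services:
  client:
    # Build the image from the Dockerfile in the current directory
    build: .
    volumes:
      - ./:/app
    ports:
      - 4201:4200
    # The bind mount hides node_modules baked into the image, so install again before starting
    command: sh -c "npm install && npm start"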

Compared to the plain Dockerfile setup, the key difference comes from the volumes property: the local folder is bind-mounted into the container and kept in sync with it. When npm install runs inside the container, the node_modules folder also appears on the host machine, so we get code completion and the lock file back.

To run this setup, a single command is enough: docker-compose up. Compose creates and starts the container, installs the dependencies, and serves the app on http://localhost:4201 (the host port mapped in the compose file).
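A few standard Compose lifecycle commands that are handy with this setup:

# Create and start the service in the foreground
docker-compose up
# Start it in the background instead
docker-compose up -d
# Follow the dev server logs of the client service
docker-compose logs -f client
# Stop and remove the containers and the network
docker-compose down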

Conclusion

Both approaches get the Angular app running in a container, though the Compose setup avoids the limitations listed above and is friendlier for day-to-day development. Using Docker in a frontend project helps a new team member get the project running very quickly. Although this example is very straightforward, real-world projects are far more complex, and that is where Docker can save a lot of time and cost.
