Create a React application container that is aware of the CI/CD and development runtime environment


This blog tackles the problem of setting up a Docker container for client-side applications that is built once and runs everywhere, by making it configurable at runtime. Having a Docker image that runs with a single command on the local environment and configures itself from the environment variables passed at production runtime is a more automated way to make things easy for everyone.

Problem

The biggest problem is that React is a client-side application: it runs in the browser and, once built inside the container, has no server of its own to contact, except in a development environment where Node.js serves the application as a process.

It becomes more complex when we are building a multi-stage (staging, beta and production) CI/CD pipeline to deploy and host our application on GCP or an EC2 instance, where the environment variables, domains and required parameters change with each production environment.

When npm produces the React application as static files during the Docker build, these files are not aware of the environment variables passed to "docker run" at runtime. So environment variables need to be injected in some other way at runtime, which we will achieve using Nginx Alpine.

 

Solution

We want to achieve both scenarios: allow multiple developers to get started with the project with just one command, without asking any platform-specific questions, and run the React application with the environment variables passed into the container. First we need an environment configuration file that contains default values to fall back to when no environment variables are provided to the container.

As mentioned, the static files (html, js, css, images…) npm produces during the Docker build phase are final and ready for serving. But we can make them more dynamic and inject environment-specific configuration right before serving. Now we will create a configuration file which will contain runtime environment variables. Since the React app can read the global JavaScript window object, the configuration can be written onto that object by a bash script and imported into our application using a script tag inside the head section of index.html.

Example configuration file:

 

REACT_APP_BASE_URL=BASE_URL

REACT_APP_BRAND_NAME=BRAND_NAME
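
After the entrypoint script (shown later) rewrites this file, what the browser actually loads is plain JavaScript that assigns onto the window object. A standalone sketch of that rewritten file (the values here are illustrative, and the browser global is simulated so the sketch also runs under Node):

```javascript
// In the browser, `window` already exists; simulate it here so this
// standalone sketch also runs under Node.
globalThis.window = globalThis.window || {};

// Shape of the rewritten env-config.js served to the browser:
window.REACT_APP_BASE_URL = 'https://api.example.com';
window.REACT_APP_BRAND_NAME = 'MyBrand';
```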

 

Implementation Steps

Configuration and environment files

Let’s start with a simple create-react-app project. I will make a big assumption that you already have a React project set up. Let's change the directory to that project. First create a configuration file called env-config.js, as mentioned above, with some default placeholder values. This file should be located in the public folder of the React app so that npm packages it with the static files it produces.

The same file will be used for development as well. It can be edited at any time, depending on the variables individual developers want to set. This file is also used when we create our Docker Compose setup to run our container.

Now we will reference the env-config.js file in index.html to make it available when the application loads. So, in index.html the following lines are added:

<html lang="en">

    <head>

        <script type="text/javascript" src="%PUBLIC_URL%/env-config.js"></script>

        ...

    </head>

    <body>

        <div id="root"></div> ...

    </body>

</html>

 

 

The next step is to dynamically replace the variable placeholders and assign the provided environment variables in the config file. The right way, it seems to me, is to create an entrypoint.sh bash script which will be executed when the Docker container starts.

Bash script:

#!/bin/bash

# Recreate config file
rm -rf ./config.js
touch ./config.js

# Read each line in env-config.js file
# Each line represents key=value pairs
while read -r line || [[ -n "$line" ]];
do
  # Split env variables by character `=`
  if printf '%s\n' "$line" | grep -q -e '='; then
    varname=$(printf '%s\n' "$line" | sed -e 's/=.*//')
    varvalue=$(printf '%s\n' "$line" | sed -e 's/^[^=]*=//')
  fi

  # Read value of current variable if it exists as an environment variable
  value=$(printf '%s\n' "${!varname}")
  # Otherwise use value from env-config.js file
  [[ -z $value ]] && value=${varvalue}

  # Append configuration property to temporary JS file
  echo "window.$varname='$value';" >> ./config.js
done < ./build/env-config.js

# Replace the config file at the build location to be imported by index.html
cp ./config.js ./build/env-config.js

nginx -g 'daemon off;'

 


 

Explanation (entrypoint.sh):

The script first creates a temporary config file, config.js, whose contents will replace the original env-config.js. It then loops through each line of env-config.js and checks whether a matching environment variable was passed to the process (Docker, in this case).

During each iteration the script appends the variable and its corresponding value, in window object notation, to the temporary config.js. If an environment variable is not found, it falls back to the value inside env-config.js. At last it replaces the /build/env-config.js file with config.js. Once the environment variables are injected into env-config.js, the static files are ready to be served by the Nginx server. Finally the script starts Nginx as its last command.

This makes the script a very convenient entry point for the Docker container.
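
The fallback behaviour can be tried outside Docker with a few lines of plain bash. This is only a simulation of the substitution logic; the file names and values here are illustrative, not part of the real entrypoint:

```shell
#!/bin/bash
# Sample config file with key=value pairs, mimicking env-config.js
printf 'REACT_APP_BASE_URL=BASE_URL\nREACT_APP_BRAND_NAME=BRAND_NAME\n' > sample-env-config.js

# Provide one variable via the environment; leave the other to fall back.
export REACT_APP_BASE_URL='https://api.example.com'

rm -f sample-config.js; touch sample-config.js
while read -r line || [[ -n "$line" ]]; do
  varname=${line%%=*}            # key before `=`
  varvalue=${line#*=}            # default value after `=`
  value=${!varname}              # environment variable with the same name, if any
  [[ -z $value ]] && value=$varvalue
  echo "window.$varname='$value';" >> sample-config.js
done < sample-env-config.js

cat sample-config.js
```

The first line of the output uses the exported environment variable, while the second falls back to the placeholder from the file.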


 

Example App Structure

Below is a basic view of the file structure of our React application.

.
|____app
| |____public
| | |____env-config.js
| |____Dockerfile
| |____Dockerfile.dev
| |____docker-compose.yml
| |____entrypoint.sh

There are a few other files that we will be using in the next sections.

 

Dockerfile

A Dockerfile is a set of instructions that Docker uses to construct an image. We declare all the software required for our project inside this file.

Dockerfile.dev

The Dockerfile.dev will look something like this:

FROM node:13.12.0-alpine

WORKDIR /app

COPY package.json .

RUN npm install

COPY . ./

CMD ["npm", "start"]

 

Explanation:

Here's what's happening. Docker needs to be explicitly told what software is required to run a React project, such as Node.js and npm, so we define the following steps.

  • We’re telling Docker to use Node as the base image and specify Alpine as its Linux distribution. Why Alpine? Alpine Linux is much smaller than most distribution base images (~5MB), and thus leads to much slimmer images in general.
  • After that we set the working directory for the Docker container with the WORKDIR command, so all further commands will be executed in that directory.
  • Copy the package.json from our React project to Docker container.
  • Install all the dependencies and copy the rest of our application to the Docker container.
  • At last, run the development script with the command "npm start". Remember we have our env-config.js file, whose default placeholder values will now be used as window object values for the application, loaded via index.html. Of course we can change the default values in the file depending on the development environment.

 

docker-compose.yml

Let's create the docker-compose file to allow developers or any engineer to run the project with the single command "docker-compose up". Compose is a useful tool to chain and control all the Docker commands and services together. It will mostly be used for development and testing purposes.

Let's create a service called react_client and specify the parameters for it.

version: "3.8"

services:
  react_client:
    stdin_open: true
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - "/app/node_modules"
      - "./:/app"

  react_client_prod:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:80"
    volumes:
      - "/app/node_modules"
      - "./:/app"
 
 
Explanation:
 
Here's what's happening inside the docker-compose.yml.
 
  • We specify a version for Compose. Make sure the Compose file version is compatible with your Docker engine; the Compose documentation has a full compatibility matrix. In this case it is version 3.8, so we’re going to use that.
  • Define the react_client service so we can run it in an isolated environment.
  • Since there are multiple services with different Dockerfiles, we specify the Dockerfile for react_client. For development, we are going to use Dockerfile.dev.
  • Next, we map port 3000 of the container to the host. The React development server runs on port 3000, so we need to tell Docker which port to expose for our application.
  • There is another service, react_client_prod, which we have created for the production container. We will look at that in a bit.
 
 

Dockerfile

So far we have Dockerfile.dev for development; let's create the Dockerfile for the production environment. The production Dockerfile is a bit different, as it requires some build steps to convert the React project into static files, and uses an Nginx server to serve the application inside the container.

 

FROM node:13.12.0-alpine as build

WORKDIR /app

COPY package.json ./

COPY package-lock.json ./

RUN npm ci --silent

RUN npm install react-scripts@3.4.1 -g --silent

COPY . ./

RUN npm run build

FROM nginx:1.18.0-alpine

RUN apk update && apk add bash

# copy the build folder from react to the root of nginx (www)

COPY --from=build /app/build /usr/share/nginx/html/build

# --------- only for those using react router ----------

RUN rm /etc/nginx/conf.d/default.conf

COPY nginx.conf /etc/nginx/conf.d/

# --------- /only for those using react router ----------

# Copy .env file and shell script to container

WORKDIR /usr/share/nginx/html

COPY entrypoint.sh ./

RUN chmod +x entrypoint.sh

# run commands in the running container and start nginx

ENTRYPOINT ["bash", "./entrypoint.sh"]

 

  • The Dockerfile uses a multi-stage build to achieve both building and serving within the same image. It starts from a Node base image, similar to our Dockerfile.dev, and changes the WORKDIR to /app.
  • It builds the React application the usual way, by copying all the relevant files and installing dependencies with "npm ci".
  • We need bash as an additional package to run our entrypoint.sh for passing environment variables inside the container.
  • Then it takes the Nginx base image for serving purposes and copies the built static files into the new stage; the previous intermediate stage is discarded and the final image size is reduced.
  • Finally, it copies the shell script, attaches execution permissions and sets the entry point command, which rewrites the config file and starts the Nginx service.
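
The nginx.conf referenced in the Dockerfile is not shown in this post. A minimal sketch of what it could look like, assuming the build output is copied to /usr/share/nginx/html/build as in the Dockerfile above and React Router is in use:

```nginx
server {
    listen 80;

    # Static files were copied to /usr/share/nginx/html/build in the Dockerfile
    root /usr/share/nginx/html/build;
    index index.html;

    location / {
        # For React Router: fall back to index.html for client-side routes
        try_files $uri $uri/ /index.html;
    }
}
```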

 

Development

 Use Compose

There are two flexible ways of running the application during development and testing.

  • The first way is running the Docker container in the development environment. Here the developer can run the container with the single command "docker-compose up react_client", without any environment variables.
  • Also, individual developers can change their application-level window object variables as per their requirements.
  • This approach is really helpful when multiple developers are collaborating on the same codebase and working on different modules, or when a new developer arrives who is unaware of the setup requirements for the application.
  • The second approach is used when developers want to test their code commits by simulating the production environment. They can simply run "docker-compose up react_client_prod". Remember to change env-config.js accordingly, because the Docker container will fall back to the default values inside this config file, as we are not providing any runtime environment variables with the command on the local machine.

Once all the files are in place, we can simply run the "docker-compose up service_name" command and Docker builds everything for us.

Once docker-compose has finished its work, output from the following container should be seen:

compose_react_client 

Open your browser at "http://localhost:3000" and you will see your application running as expected.

 

Production Deployment

Use Compose

We already know how to test the production container environment on our local machine before deployment. Let's simulate the original production scenario, where a CI/CD pipeline is defined to push our image to Docker Hub and run it on an EC2 instance in AWS or a container in Google Compute Engine.

To try out this configuration locally, the following command can be run:

docker run -it -e REACT_APP_BASE_URL=https://googleapi.outh.com -e REACT_APP_BRAND_NAME=DE -p 3000:80 --rm react-docker-app_react_client_prod

This command simulates how the Docker container will actually be run by the cloud host, with the environment variables for each stage of the Agile cycle. If everything went well, the React application should be running at "http://localhost:3000".

In the browser inspector (such as Google Chrome DevTools) you should see the env-config.js file rewritten with the variable values passed from the CI/CD deployment.

window.REACT_APP_BASE_URL='https://googleapi.outh.com';
window.REACT_APP_BRAND_NAME='DE';

Accessing environment variables in React code is done through the window object.
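
To make this concrete, a small hypothetical helper (not part of the original post) could read a runtime value from the window object, with a fallback for environments where it is not set:

```javascript
// Hypothetical helper: read a runtime setting injected via env-config.js,
// falling back to a default when it is absent (e.g. in tests or SSR).
function getConfig(name, fallback) {
  const w = typeof window !== 'undefined' ? window : {};
  return w[name] !== undefined ? w[name] : fallback;
}

// Simulate the browser global so the sketch runs standalone under Node;
// in the browser, env-config.js has already made these assignments.
globalThis.window = { REACT_APP_BASE_URL: 'https://api.example.com' };

const baseUrl = getConfig('REACT_APP_BASE_URL', 'http://localhost:3000');
const brand = getConfig('REACT_APP_BRAND_NAME', 'DefaultBrand');
```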

 

Conclusion

Here is a high-level summary of the approach used in this blog:

  • Create Dockerfile for production, Dockerfile.dev for development and docker-compose.yml for chaining and controlling the multiple services.
  • Create an env-config.js file with environment variable placeholders in the public folder of the React application.
  • Add env-config.js reference in script tag in index.html.
  • Create a shell script (entrypoint.sh) for rewriting the env-config.js file at runtime.
  • Use the script as the Docker entry point to rewrite the config file and then start the Nginx server.

With the approach described, the React container will be aware of environment variables passed at runtime during development, production or the CI/CD pipeline, through a dynamically replaceable env-config.js file. The benefit of this approach is that the container is built only once and reused across environments depending on runtime variables, which removes headaches for everyone and automates the post-development strategy. Build time is also shortened significantly.
