Deliver complete, comprehensive testing in less time to ensure quality.
Agile development is now widespread, and testing is central to it: because new versions are released frequently, test cases must be executed just as frequently to ensure that no new bugs slip into a release.
The time and resources required for a complete testing pass, including the analysis of test results, should not be underestimated. How to deliver complete, comprehensive testing in a shorter time is a problem we are eager to solve, and it is key to keeping agile development on track.
Jenkins enables an unattended testing process: once development is complete and the build is deployed to the test environment, downstream test tasks are executed automatically.
Jenkins saves human resources to a certain extent, while Docker enables rapid scaling of containers, saving substantial equipment resources and time so that tests complete quickly. This is an important part of the Jenkins pipeline, as shown in Figure 1.
Figure 1. Jenkins pipeline
This article describes how to use the Docker Swarm cluster feature together with the Selenium Grid script-distribution feature to build a dynamically scalable execution environment for Selenium automation scripts. Compared with using real machines as the execution environment, this environment greatly reduces maintenance effort, such as managing the various browser types and versions, and it also greatly reduces the material investment in the script execution environment, saving resources of all kinds.
Swarm is the cluster management tool provided by Docker. It abstracts several Docker hosts into a single whole and manages the Docker resources on those hosts through a single entry point.
Swarm is only a scheduler and router; it does not run containers itself. It simply accepts requests from Docker clients and schedules suitable nodes to run the containers. This means that even if Swarm goes down for some reason, the nodes in the cluster keep running as usual, and when Swarm comes back up it collects information and rebuilds the cluster.
Swarm is similar to Kubernetes, but lighter and has fewer features than Kubernetes.
To set up a Docker Swarm cluster environment, this example uses two machines: one acts as the manager node (and also as a worker node), and the other acts only as a worker node.
Let's assume that the IP information of our two machines is as follows:
m1: 10.13.181.1
m2: (worker node IP)
Docker Swarm has been natively integrated into Docker Engine since v1.12.0, so as long as Docker is installed on each machine, Docker Swarm can be used directly. Docker installation is not described in detail here; please follow the official Docker documentation. Once installation is complete, start the Docker service on each machine.
Note: It's a good idea to turn off the firewall on your machine, otherwise you may have network connectivity issues with your swarm cluster.
Command to stop the firewall: systemctl stop firewalld.service
Command to disable the firewall at boot: systemctl disable firewalld.service
1. Create a management node.
We use machine m1 as the manager node and execute the following command on it to initialize the cluster environment:
sudo docker swarm init --advertise-addr 10.13.181.1
When this command is executed, it returns a token for joining the cluster, which other workers can use to join.
Listing 1. Example command for joining the cluster
docker swarm join --token swmtkn-1-5p3kzxhsvlqonst5wr02hdo185kcpdajcu9omy4z5dpmlsyrzj-3phtv1qkfdly2kchzxh0h1xft 10.13.181.1:2377
2. If you need to retrieve the join command again, run the following on the manager node: sudo docker swarm join-token worker
3. Add machine m1 to the cluster as a worker node. Run the command in Listing 1 on the manager node to add m1 to the swarm cluster as a worker.
4. Add the other machine, m2, to the cluster as a worker node. Run the command in Listing 1 on machine m2 to join it to the cluster.
5. Run the following command to create a cluster network:
sudo docker network create -d overlay seleniumnet
Here, seleniumnet is the name of the cluster network we created.
6. Create the Selenium Grid service on the newly created cluster network.
a. Create a Selenium Grid Hub service. Based on the cluster network seleniumnet, map port 4444 to port 4444 of the cluster, and set the timeout to 120 seconds (you can increase or decrease this value as needed). As shown in Listing 2:
Listing 2. Create a Selenium Grid Hub service
sudo docker service create --name selenium-hub --network seleniumnet -p 4444:4444 -e GRID_TIMEOUT=120 selenium/hub
b. Create the Selenium Grid Firefox node service and connect it to the Hub service just created. As shown in Listing 3:
Listing 3. Create a Selenium Grid Firefox node service
sudo docker service create \
  --name node-firefox \
  --replicas 5 \
  -p 7900:5900 \
  --network seleniumnet \
  -e HUB_PORT_4444_TCP_ADDR=selenium-hub \
  -e HUB_PORT_4444_TCP_PORT=4444 \
  selenium/node-firefox-debug \
  bash -c 'SE_OPTS="-host $HOSTNAME" /opt/bin/entry_point.sh'
Parameter description: -p 7900:5900
Maps the container's internal VNC port 5900 to port 7900 on the host, so that users can monitor execution inside the container from the outside via VNC.
c. Create the Selenium Grid Chrome node service and connect it to the Hub service just created. As shown in Listing 4:
Listing 4. Create a Selenium Grid Chrome node service
sudo docker service create \
  --name node-chrome \
  --replicas 3 \
  -p 7901:5900 \
  --network seleniumnet \
  -e HUB_PORT_4444_TCP_ADDR=selenium-hub \
  -e HUB_PORT_4444_TCP_PORT=4444 \
  selenium/node-chrome-debug \
  bash -c 'SE_OPTS="-host $HOSTNAME" /opt/bin/entry_point.sh'
Parameter description: -p 7901:5900
Maps the container's internal VNC port 5900 to port 7901 on the host, so that users can monitor execution inside the container from the outside via VNC.
7. Check whether the environment is built successfully. Execute the following command on machine m1 to see if each service starts successfully:
sudo docker service ls
You can see that the Selenium Hub, the Firefox nodes, and the Chrome nodes have all started successfully; Firefox has 5 node replicas and Chrome has 3. As shown in Figure 2.
Figure 2. Docker service list
We then open the Selenium Hub URL, using the IP of any machine and port 4444, to check whether the started Firefox and Chrome nodes have been successfully mounted to the Hub node. As shown in Figure 3.
Hub URL: http://10.13.181.1:4444
Figure 3. The Selenium Hub interface
As you can see in Figure 3, 5 Firefox nodes and 3 Chrome nodes have been successfully mounted to the hub node; the Docker Swarm environment thus provides 5 Firefox nodes and 3 Chrome nodes for executing Selenium automated test scripts.
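Besides opening the console in a browser, the mount check can be scripted. The sketch below is an assumption on my part, not part of the original article: Selenium Grid 3 hubs (as started by the selenium/hub image of that era) expose a JSON status endpoint at /grid/api/hub that reports slot counts; the host IP matches the manager node used earlier.

```python
# Sketch: query the hub's JSON status endpoint to confirm that browser
# slots (i.e. nodes) are attached. Assumes a Selenium Grid 3 hub, which
# serves /grid/api/hub; adjust HUB_HOST to your manager node's IP.
import json
from urllib.request import urlopen

HUB_HOST = "10.13.181.1"

def hub_api_url(host, port=4444):
    """Build the Grid 3 hub status endpoint URL."""
    return "http://%s:%d/grid/api/hub" % (host, port)

def free_slots(host, port=4444):
    """Return (free, total) browser slots reported by the hub."""
    with urlopen(hub_api_url(host, port)) as resp:
        status = json.load(resp)
    counts = status["slotCounts"]
    return counts["free"], counts["total"]

if __name__ == "__main__":
    print(hub_api_url(HUB_HOST))
    # On a machine that can reach the hub, uncomment to print slot counts:
    # print(free_slots(HUB_HOST))
```

With the services above running, the total slot count should reflect the 5 Firefox and 3 Chrome replicas.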
If more capacity is needed, a node service can be scaled with a single command. For example, if we need 10 containers that can run the Firefox browser, the corresponding command is as follows:
sudo docker service scale node-firefox=10
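The same scaling step can be driven from a CI job rather than the shell. The helper below is a hedged sketch, not from the original article: it assumes the Docker SDK for Python (pip install docker) is available and that it runs on the manager node; the service name node-firefox matches the service created earlier.

```python
# Sketch: scale the Firefox node service programmatically.
def scale_command(service_name, replicas):
    """Build the equivalent CLI command, e.g. for logging in a CI job."""
    return "docker service scale %s=%d" % (service_name, replicas)

def scale_service(service_name, replicas):
    """Scale a swarm service via the Docker SDK (run on a manager node)."""
    import docker  # deferred import so the helper is optional to use
    client = docker.from_env()
    client.services.get(service_name).scale(replicas)

if __name__ == "__main__":
    print(scale_command("node-firefox", 10))
    # On a swarm manager with the SDK installed:
    # scale_service("node-firefox", 10)
```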
When running a Jenkins job against Docker Swarm, no extra configuration is needed in Jenkins; instead, the automation script itself calls the Selenium Hub to obtain a remote WebDriver. This is what makes it possible to run Selenium scripts in Docker containers.
Using the scenario in this article as an example, the automation script only needs to call the remote Selenium Hub, as described below.
Selenium Grid is used for distributed automated testing: one set of Selenium test code can be run on different environments, which makes it easy to run tests in the various containers that Docker provides.
Selenium Grid has two concepts:
hub: the master node, which you can think of as the central dispatcher.
node: a branch node, which you can think of as the worker that actually executes tasks.
That is, a Selenium Grid has exactly one master hub, but n nodes can be set up locally or remotely. Test scripts point at the master hub, and the hub assigns test cases to local or remote nodes to run.
To run an automation script on the Selenium Grid, we first need to create a Remote WebDriver object. This can be done as in the source code in Figure 4; the input parameter selhub in the screenshot is the URL of the Selenium Hub.
Figure 4. Screenshot of the automation script
By creating the driver in this way, you can run the automation script in a Docker container.
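As a hypothetical sketch of the pattern in Figure 4 (the original screenshot is not reproduced here, and may use a different language binding), the following assumes the selenium Python bindings and a hub URL matching the environment built above:

```python
# Sketch: create a Remote WebDriver pointed at the Selenium Hub, so the
# hub dispatches the session to one of the Firefox or Chrome grid nodes.
HUB_URL = "http://10.13.181.1:4444/wd/hub"  # the hub started earlier

def make_remote_driver(browser="firefox", hub_url=HUB_URL):
    """Create a WebDriver session on whichever grid node the hub picks."""
    from selenium import webdriver  # deferred; needs `pip install selenium`
    if browser == "firefox":
        options = webdriver.FirefoxOptions()
    else:
        options = webdriver.ChromeOptions()
    return webdriver.Remote(command_executor=hub_url, options=options)

if __name__ == "__main__":
    # Requires a reachable hub with free nodes:
    # driver = make_remote_driver("firefox")
    # driver.get("https://example.com")
    # print(driver.title)
    # driver.quit()
    pass
```

The test code itself is unchanged from a local run; only the driver construction points at the remote hub instead of a local browser.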
In continuous integration testing, deploying tests to Docker Swarm and letting Selenium Grid automatically allocate nodes to execute them improves test efficiency and broadens test coverage, helping to ensure the quality of delivered products while saving test resources during rapid iteration.