How to use Docker containers to implement proxy forwarding and data backup

Preface

When we deploy applications to servers as Docker containers, we usually need to consider two aspects: network and storage.

In terms of network, some applications need to listen on ports, and some even need to be reachable from outside the server.

For security reasons, proxy forwarding is more appropriate than directly opening firewall ports.

In terms of storage, since containers are not suited to persisting data, data is generally saved on the server's disk by mounting volumes.

However, the server cannot guarantee absolute security, so the data also needs to be backed up to the cloud.

Proxy forwarding

By default, the networks between containers are isolated from each other. However, a group of related applications (say, a web front-end container, a back-end server container, and a database container) is generally placed on its own bridge network (hereinafter referred to as a subnet) so that these containers can communicate with each other while remaining isolated from the outside.

For containers that need to be reachable from outside the subnet, you can map their ports to the server host. The whole structure is as follows:
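As a concrete sketch of the structure described above (the network, container, and image names below are made up for illustration), the subnet and the port mapping might look like this:

```shell
# Create an isolated bridge network (the "subnet") for the related containers.
docker network create app-net

# These containers can reach each other by name on app-net,
# but are unreachable from other networks.
docker run -d --name app-db  --network app-net postgres:13
docker run -d --name app-api --network app-net my-api-image

# Only the front end is mapped to the host: host port 1234 -> container port 80.
docker run -d --name app-web --network app-net -p 1234:80 my-web-image
```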


The port mapping above only lets the server (host) reach the container's network services. Accessing those containers from a local machine over the Internet is generally not possible, because for security reasons the server enables a firewall by default and opens only a few ports, such as 22.

In a traditional deployment, the solution is to forward requests through a reverse proxy server, for example with an Nginx configuration like this:

# Forwarding by path
server {
    listen 80;
    server_name www.xx.com;

    location /a {
        proxy_pass http://localhost:1234;
    }
    location /b {
        proxy_pass http://localhost:2234;
    }
}

# Forwarding by domain name
server {
    listen 80;
    server_name www.yy.com;

    location / {
        proxy_pass http://localhost:1234;
    }
}

So the problem seems to be solved at this point, but what if Nginx is also running in a container?

As mentioned above, a subnet is isolated from containers outside it, so an Nginx container would not be able to reach services in those subnets.

You might think of attaching the Nginx container to the corresponding subnets. A container does support being connected to multiple networks, but the trouble with this approach is that every time a new subnet is added, the Nginx container's network configuration must be modified and the container restarted.

So a better way is to run Nginx in host network mode. This gives up the isolation between the Nginx container and the server: the two share the network stack and ports directly, so the Nginx container can reach every container with mapped ports.

As shown in the following figure:
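In command form, a host-mode Nginx container can be started like this (the image tag and config mount path are illustrative):

```shell
# --network host: the container shares the host's network stack directly,
# so no -p mappings are needed (they would be ignored in this mode).
docker run -d --name nginx-proxy \
  --network host \
  -v /etc/nginx/conf.d:/etc/nginx/conf.d:ro \
  nginx:stable
```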


Data backup

Application Scenario

Weighing speed against security, companies usually keep some servers accessible only from the intranet. However, the data on these servers, and the servers themselves, may be modified or fail at any time.

Therefore, data backup is particularly important. Here we discuss backing up relatively small amounts of data.

Take the knowledge base server I recently built for my team as an example.

This web application is a small Python service deployed on an intranet server as a container. It supports online editing and stores its data as md files.

Once a container fails, the data inside it is no longer accessible, so keeping data directly in the container is clearly unsafe. Mounting a volume is the way to let the container and the server share the data for reading and writing.

So how to back up the data? Here we choose GitHub's private repository to save. There are three reasons:

  • Safe: the data is not easily lost or stolen.
  • Convenient: a backup takes only a few git commands.
  • Fast: the volume and number of files being backed up are small.

Although the method has been determined, there are still two problems to be solved:

  • Accessing the GitHub repository requires authentication.
  • Data needs to be committed and pushed to GitHub on a schedule, or automatically.

Implementation

First, following the single-responsibility principle for containers, we should create a dedicated container to perform the backup task.

Here we can use docker-compose or other orchestration tools to create multiple containers.

Then comes authentication. Create an SSH key on the server and add the public key in the GitHub settings, so that pushes to the corresponding repository are authorized.

However, at this point only the server can push code, not the container, so the .ssh directory also needs to be copied (or mounted) into the container.
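As a sketch (the key type, paths, and image name are assumptions, not from the original setup), the key can be generated on the server, registered with GitHub, and the .ssh directory mounted read-only into the backup container:

```shell
# Generate a key pair on the server; no passphrase, so cron can push unattended.
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519

# Print the public key, then add it under GitHub -> Settings -> SSH keys.
cat ~/.ssh/id_ed25519.pub

# Mount .ssh (read-only) and the data volume into the backup container.
docker run -d --name kb-backup \
  -v ~/.ssh:/root/.ssh:ro \
  -v /data/wiki:/data/wiki \
  my-backup-image
```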

Finally, the automatic backup itself. Ideally we would commit and push every time a file changes, but there is currently no simple way to watch files from inside the container. The next best option is a scheduled task: run the corresponding git commands every 5 minutes to commit and push changes to the repository.

Here you can use a lightweight container based on the busybox image, mount the project code into the container so the files stay in sync, and then start the cron service to run the job.
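The scheduled job above can be sketched as a small shell script. This is a minimal sketch: the script name backup.sh, the repository path /data/wiki, and the remote/branch names are assumptions, not taken from the original setup.

```shell
#!/bin/sh
# Sketch of the auto-backup job run by cron inside the busybox container.

backup() {
    repo="${1:-/data/wiki}"          # assumed volume mount point
    cd "$repo" || return 1
    git add -A
    # Commit only when the working tree actually changed, then push.
    if ! git diff --cached --quiet; then
        git commit -q -m "auto backup: $(date '+%Y-%m-%d %H:%M')"
        git push -q origin master
    fi
}

# Scheduled every 5 minutes via the container's crontab, e.g.:
# */5 * * * * /backup.sh >> /var/log/backup.log 2>&1
```

The commit-only-on-change check keeps the history clean; a failed push (for example, a transient network error) is simply retried on the next cron run, since the commit is already recorded locally.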

Summary

That is the full content of this article. I hope it provides some reference value for your study or work. If you have any questions, feel free to leave a message. Thank you for your support of 123WORDPRESS.COM.

