CNM Bureau Farm

Revision as of 23:50, 7 August 2023

CNM Bureau Farm (formerly known as CNM EndUser Farm; hereinafter, the Farm) is the CNM farm that hosts CNM Social, CNM Talk, and CNM Venture (hereinafter, the Apps). The Apps are described in the #End-user applications section of this wikipage.

Technically, the Farm is a collection of commercial off-the-shelf (COTS) software. End-users work with the Apps, which are installed in #Virtual environments (VE), which in turn run on #Node OS, the node-root-level operating system (OS), which is installed on the #Infrastructure. The #Infrastructure consists of infrastructure-level #Bridges and hardware, which includes the #Backup server and #Bare-metal servers.

To eliminate a single point of failure, the Farm is built on three #Bare-metal servers. Each of those hardware servers, with all of the software installed on top of it (hereinafter, the Node), is self-sufficient to host the Apps. #High availability (HA) tools orchestrate coordination between the Nodes.


End-user applications

Literally, the Apps are those end-user applications with which end-users of the Farm interact. The Apps can be deployed utilizing two models:

  1. Using containers; they already contain operating systems tailored specifically to the needs of the App.
  2. In virtual machines (VMs), without containers. In that model, the App is installed on the operating system of its VM.

HumHub

CNM Social is the end-user instance of CNM HumHub.

Odoo

CNM Venture is the end-user instance of CNM Odoo.

Jitsi

CNM Talk is the end-user instance of CNM Jitsi.

Virtual environments (VE)

CNMCyber Team uses virtualization to divide the extensive hardware resources of the #Bare-metal servers into smaller containers and virtual machines (VMs), which are created in virtual environments (VEs).

As its software for VEs, the Farm utilizes CNM ProxmoxVE. Every instance of CNM ProxmoxVE is installed on #Node OS, which requires "physical" #Bare-metal servers. The Farm's CNM ProxmoxVE also utilizes the #Storage platform as its storage.

Choice of VE COTS

CNMCyber Team has tried OpenStack and VirtualBox as its virtualization tools. The trials suggested that OpenStack required more hardware resources and VirtualBox did not allow for the required sophistication in comparison with ProxmoxVE, which has therefore been chosen as the COTS for the Farm's virtualization.

Node OS

The interaction between CNM ProxmoxVE instances and the #Infrastructure is carried out by the Debian operating system that comes in the same COTS "box" as ProxmoxVE and is specially configured for that interaction.

Storage platform

To make objects, blocks, and files immediately available for the Apps' operations, the Farm uses CNM Ceph as its storage. This common distributed cluster foundation orchestrates storage spaces of the individual Nodes.

Choice of storage COTS

CNMCyber Team has tried OpenZFS and RAID as the Farm's storage. Initially, Ceph was proposed by the first cluster developer. Later, the team substituted one node with another that had a larger hard disk, but no SSD or NVMe drive; as a result, the Farm's storage collapsed. The substituted node was disconnected (today, it serves as hardware for CNM Lab Farm), a new bare-metal server was purchased (today, it is the #Node 3 hardware), and Ceph was restored.
As COTS, ProxmoxVE comes with OpenZFS. CNMCyber Team has deployed the combination of both in its CNM Lab Farm.

Deployment model

At the Farm, CNM Ceph is deployed on every Node. Each of the #Bare-metal servers features two hard disks. Physically, CNM ProxmoxVE is installed on one disk of each Node; CNM Ceph uses the three "second" disks. Since every disk is 512 GB, the Farm's raw CNM Ceph capacity is about 3 * 512 GB = 1,536 GB.
While experimenting with OpenZFS and RAID, CNMCyber Team also tried another model, in which the second disks served as reserve copies of the first ones.
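As a back-of-the-envelope check of the figures above, the following Python sketch recomputes the raw capacity and, assuming Ceph's default three-way replication (an assumption, since the Farm's actual pool setting is not documented on this wikipage), the usable capacity:

  # Back-of-the-envelope capacity check for the Farm's Ceph storage.
  # Assumption: a replication factor of 3 (Ceph's default pool size);
  # the Farm's actual pool setting is not documented on this wikipage.
  NODES = 3               # one "second" disk per Node is given to Ceph
  DISK_GB = 512           # each disk is 512 GB
  REPLICATION_FACTOR = 3  # assumed

  raw_gb = NODES * DISK_GB
  usable_gb = raw_gb / REPLICATION_FACTOR

  print(f"Raw Ceph capacity: {raw_gb} GB")                        # 1536 GB
  print(f"Usable (assumed 3x replication): {usable_gb:.0f} GB")   # 512 GB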

High availability (HA)

High availability (HA) of the Farm assumes that no failure of any App or its database management system (DBMS) can cause the failure of the Farm as a whole. HA tools are based on:

  • A principle of redundancy; that is why the Farm is built on three Nodes, not one. Every App is installed at least twice on different Nodes as described in the #HA at the App level section. Every object, block, or file is stored at least twice on different Nodes as described in the #HA at the DBMS level section.
  • Management of redundant resources as described in the #HA management section. In plain English, the Farm needs to put into operation those and only those resources that are in operational shape.

Generally speaking, HA comes with significant costs, and so does the HA of the Farm. At the very least, running three Nodes is more expensive than running one. Since the cost cannot exceed the benefit, high availability cannot be equal to full failure tolerance.

HA at the App level

When one App fails, its work is continued by its sister App installed on the second Node. If that App also fails, its work is continued by the sister App installed on the third Node. If the third App fails, the Farm can no longer provide its users with the App's services.
To ensure that, the Farm utilizes tools that come with ProxmoxVE. Every virtual machine (VM) or container is kept on at least two Nodes. When the operational resource, a VM or container, fails, CNM ProxmoxVE activates another resource and creates a third resource as a reserve. As a result, the VM or container "migrates" from one Node to another, as sketched below.
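The minimal Python sketch below illustrates only the failover principle described above, not ProxmoxVE's actual HA stack; the Node names and health states are hypothetical:

  # Illustration of the failover principle only; ProxmoxVE's real HA manager
  # is far more involved. Node names and health states are hypothetical.
  nodes = ["pm1", "pm2", "pm3"]                        # the three Nodes
  healthy = {"pm1": False, "pm2": True, "pm3": True}   # pretend pm1 just failed

  def next_healthy_node(current):
      """Pick the next Node, in order, that is reported healthy."""
      start = nodes.index(current)
      for offset in range(1, len(nodes) + 1):
          candidate = nodes[(start + offset) % len(nodes)]
          if healthy[candidate]:
              return candidate
      return None  # all Nodes down: the Farm can no longer serve the App

  print("App migrates from pm1 to", next_healthy_node("pm1"))  # -> pm2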

HA at the DBMS level

When one DBMS fails, its work is continued by its sister DBMS installed on the second Node. If that DBMS also fails, its work is continued by the sister DBMS installed on the third Node. If the third DBMS fails, the Farm can no longer provide the App with the data it requires to work properly.
To ensure that, the Farm utilizes the #Storage platform. Every object, block, or file is kept on at least two Nodes. When any stored resource fails, the #Storage platform activates another resource and creates a third resource as a reserve. As a result, any stored resource "migrates" from one Node to another.

HA management

To manage redundant resources, the Farm:
  • Monitors its resources to identify whether they are operational or failed, as described in the #Monitoring section.
  • Fences those resources that are identified as failed. As a result, non-operational resources are withdrawn from the list of available resources.
  • Restores those resources that are fenced. The #Backup and recovery system supports that feature by constantly creating snapshots and reserve copies of the Farm and its parts in order to make them available for restoring when needed, as sketched below.
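The Python sketch below walks a few resources through that monitor-fence-restore cycle; the resource names and the health probe are placeholders, not the Farm's actual monitoring or backup calls:

  # Toy model of the monitor -> fence -> restore cycle; the probe and the
  # restore step are placeholders, not the Farm's real tooling.
  available = {"vm-social", "vm-talk", "vm-venture"}   # resources in operation
  fenced = set()

  def probe(resource):
      """Placeholder health check; a real check would query the resource."""
      return resource != "vm-talk"        # pretend vm-talk has failed

  # 1. Monitor
  failed = {r for r in available if not probe(r)}

  # 2. Fence: withdraw failed resources from the list of available ones
  available -= failed
  fenced |= failed

  # 3. Restore: bring fenced resources back from a snapshot or reserve copy
  for resource in sorted(fenced):
      print(f"restoring {resource} from its latest snapshot ...")
      available.add(resource)   # rejoins the pool once probed healthy again
  fenced.clear()

  print("available:", sorted(available))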

Web architecture

For the purposes of this wikipage, "web architecture" refers to the Farm's outline of DNS records and IP addresses.

Channels and networks

The Farm's communication channels are built on the #Bare-metal servers and #Bridges. Currently, the Farm uses three communication channels, each of which serves one of the networks as follows:
  1. WAN (wide area network), which is the Farm's public network that uses external, public IPv4 addresses to integrate the #LAN gateway into the Internet. The public network is described in the #LAN gateway section of this wikipage.
  2. LAN (local area network), which is the Farm's private network that uses internal, private IPv6 addresses to integrate #LAN gateway and the Nodes into one network cluster. This network cluster is described in the #Virtual environments section of this wikipage.
  3. SAN (storage area network), which is the Farm's private network that uses internal, private IPv6 addresses to integrate storage spaces of the Nodes into one storage cluster. This storage cluster is described in the #Storage platform section of this wikipage.
The Farm's usage of IP addresses is best described in the #IP addresses section.

DNS zone

To locate the Farm's public resources in the Internet, the following DNS records are created in the Farm's DNS zone (a resolution spot-check follows the table):
Field                 Type          Data                   Comment (not a part of the records)
pm1.bskol.com         AAAA record   2a01:4f8:10a:439b::2   Node 1
pm2.bskol.com         AAAA record   2a01:4f8:10a:1791::2   Node 2
pm3.bskol.com         AAAA record   2a01:4f8:10b:cdb::2    Node 3
pf.bskol.com          A record      88.99.71.85            CNM pfSense
npm1.bskol.com        A record      88.99.218.172          Node 1 Nginx
npm2.bskol.com        A record      88.99.71.85            Node 2 Nginx
npm3.bskol.com        A record      94.130.8.161           Node 3 Nginx
talk.cnmcyber.com     AAAA record   2a01:4f8:fff0:53::2    CNM Talk (CNM Jitsi)
corp.cnmcyber.com     AAAA record   2a01:4f8:fff0:53::3    CNM Venture (CNM Odoo)
social.cnmcyber.com   AAAA record   2a01:4f8:fff0:53::4    CNM Social (CNM HumHub)
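A minimal Python sketch for spot-checking that the published records resolve as expected; it relies only on the operating system's resolver via the standard library and the hostnames listed above:

  # Spot-check of the DNS zone: resolve each published hostname and print
  # whatever A/AAAA addresses the system resolver returns.
  import socket

  hostnames = [
      "pm1.bskol.com", "pm2.bskol.com", "pm3.bskol.com",
      "pf.bskol.com", "npm1.bskol.com", "npm2.bskol.com", "npm3.bskol.com",
      "talk.cnmcyber.com", "corp.cnmcyber.com", "social.cnmcyber.com",
  ]

  for name in hostnames:
      try:
          infos = socket.getaddrinfo(name, None)
          addresses = sorted({info[4][0] for info in infos})
          print(f"{name:22} -> {', '.join(addresses)}")
      except socket.gaierror as error:
          print(f"{name:22} -> resolution failed: {error}")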

IP addresses

To locate its resources in the #Communication channels, the Farm uses three types of IP addresses:
  1. To access #Virtual environments (VE) of various Nodes from the outside world, the Farm features public IPv6 addresses. One address is assigned to each Node. Since there are three Nodes, three addresses of that type are created.
  2. For the internal network of the three Nodes, which is assembled on the internal Bridge, private IP addresses are used. This network is not accessible from the Internet and is not included in the Farm's DNS zone. For instance, the #Storage platform utilizes this network to synchronize its data. For this network, a private subnet with a /24 prefix is selected (a hypothetical example follows this list).
  3. For an external network of three Nodes, which is assembled on the external Bridge, the Farm features public IPv4 addresses. They are handled by #Web intermediaries.
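As an illustration of the /24 private subnet mentioned in item 2, the Python sketch below enumerates a hypothetical 192.168.100.0/24 network; the Farm's real internal addressing is intentionally not published here:

  # Hypothetical /24 private subnet; the Farm's actual internal addressing
  # is not published on this wikipage.
  import ipaddress

  subnet = ipaddress.ip_network("192.168.100.0/24")
  print(subnet.num_addresses, "addresses, netmask", subnet.netmask)  # 256 addresses

  # e.g. one address per Node for the internal (storage) network
  for node, address in zip(["pm1", "pm2", "pm3"], subnet.hosts()):
      print(node, "->", address)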

LAN gateway

For the purposes of this wikipage, "LAN gateway" (hereinafter, the Gateway) refers to the Farm's local area network (LAN) that includes a load balancer and a reverse proxy.

Choice of LAN COTS

Functions

The Gateway can be compared to an executive secretary, who (a) takes external clients' requests, (b) serves as a gatekeeper, checking the validity of those requests, (c) when a request is valid, selects which internal resource to dispatch it to, (d) dispatches the request to the selected resource, (e) gets the internal response, and (f) returns it back to the client in the outside world.
Thus, the Gateway (a) receives requests from the world outside of the Farm, (b) serves as a firewall, checking the validity of those requests, (c) when a request is valid, selects which Node to dispatch it to, (d) dispatches the request to the selected Node, (e) gets the internal response, and (f) returns that response to the outside world.
The Gateway is responsible for dispatching external requests to those and only those internal resources that the Farm's #Monitoring has identified as operational. To be more accessible to its clients, the Gateway utilizes public IPv4 addresses.
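The toy Python sketch below mirrors that request flow (validate, pick an operational Node, dispatch, return the response); the validation rule and the dispatch call are placeholders rather than pfSense or HAProxy logic:

  # Toy model of the Gateway's request flow; validation and dispatch are
  # placeholders rather than real pfSense/HAProxy behaviour.
  operational_nodes = ["pm2", "pm3"]     # Nodes reported healthy by monitoring

  def is_valid(request):
      """Placeholder firewall check."""
      return request.get("host", "").endswith("cnmcyber.com")

  def dispatch(request, node):
      """Placeholder for forwarding the request to an internal resource."""
      return f"response for {request['path']} served by {node}"

  def handle(request):
      if not is_valid(request):          # (b) gatekeeping
          return "403 rejected"
      node = operational_nodes[0]        # (c) selection, policy aside
      return dispatch(request, node)     # (d)-(f) dispatch and return

  print(handle({"host": "social.cnmcyber.com", "path": "/dashboard"}))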

Gateway components

Load balancer

As a load balancer, CNM pfSense uses an edition of HAProxy that is specifically configured as a pfSense add-on. As of the summer of 2023, no HAProxy Manager exists in the Farm.
As of the summer of 2023, a round robin model is activated for load balancing.
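A round-robin policy simply hands successive requests to the backends in rotating order; the Python sketch below shows the idea and is not the HAProxy configuration itself:

  # Round-robin selection in a nutshell: successive requests rotate through
  # the backends in a fixed order. This illustrates the policy only, not
  # HAProxy's implementation or configuration.
  import itertools

  backends = ["pm1", "pm2", "pm3"]
  rotation = itertools.cycle(backends)

  for request_number in range(1, 7):
      print(f"request {request_number} -> {next(rotation)}")
  # request 1 -> pm1, request 2 -> pm2, request 3 -> pm3, request 4 -> pm1, ...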

Firewall and router

Reverse proxy, firewall

Web servers

As its web server, pfSense utilizes lighttpd. Prior to the deployment of CNM pfSense, CNMCyber Team utilized two web servers to communicate with the outside world via HTTP: Nginx handled requests initially, and Apache HTTP Server handled those requests that hadn't been handled by Nginx.

Security tools

Monitoring

No special monitoring functions are currently in use.
Candidates' proposals (a sample check follows this list):
  1. A stack of Prometheus + node-exporter + Grafana
  2. Prometheus to monitor the VMs, InfluxDB to monitor the PVE nodes, Grafana for dashboards
  3. (M) Grafana + InfluxDB + Telegraf, as well as Zabbix; UptimeRobot to monitor the website
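If the first proposed stack were adopted, a check like the Python sketch below could query Prometheus's HTTP API to see whether node-exporter targets are up; the server address is hypothetical, since no Prometheus instance is documented for the Farm yet:

  # Hypothetical check against the first proposed stack: ask Prometheus's
  # HTTP API whether the node-exporter targets are up. The address below
  # is an assumption, not an existing Farm resource.
  import json
  import urllib.parse
  import urllib.request

  PROMETHEUS = "http://monitoring.example.internal:9090"   # hypothetical
  query = urllib.parse.urlencode({"query": "up{job='node-exporter'}"})

  with urllib.request.urlopen(f"{PROMETHEUS}/api/v1/query?{query}") as response:
      payload = json.load(response)

  for result in payload["data"]["result"]:
      instance = result["metric"].get("instance", "unknown")
      print(instance, "is", "up" if result["value"][1] == "1" else "down")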

Firewalls

The Farm uses iptables as a firewall.
For security, CNMCyber Team uses Fail2ban because it operates by monitoring log files (e.g., /var/log/auth.log, /var/log/apache/access.log, etc.) for selected entries and running scripts based on them. Most commonly, this is used to block selected IP addresses that may belong to hosts that are trying to breach the system's security. It can ban any host IP address that makes too many login attempts or performs any other unwanted action within a time frame defined by the administrator. Fail2ban includes support for both IPv4 and IPv6.
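The principle that Fail2ban applies, namely counting suspicious log entries per source IP and banning repeat offenders, can be illustrated with a few lines of standard-library Python; the log excerpt, pattern, and threshold below are made up for illustration:

  # Illustration of Fail2ban's principle only: scan a log, count failed
  # logins per source IP, and flag repeat offenders. The sample lines,
  # pattern, and threshold are invented; Fail2ban itself reads real log
  # files such as /var/log/auth.log and inserts firewall rules.
  import re
  from collections import Counter

  sample_log = """\
  Failed password for root from 203.0.113.7 port 50214 ssh2
  Failed password for admin from 203.0.113.7 port 50215 ssh2
  Failed password for root from 198.51.100.2 port 40022 ssh2
  Failed password for root from 203.0.113.7 port 50216 ssh2
  """

  MAX_RETRY = 3   # ban after this many failures within the observed window
  pattern = re.compile(r"Failed password .* from (\S+) port")

  failures = Counter(pattern.findall(sample_log))
  for ip, count in failures.items():
      if count >= MAX_RETRY:
          print(f"would ban {ip} ({count} failed attempts)")   # 203.0.113.7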

Backup and recovery

Accesses

End-user access

End-users of the Farm (hereinafter, the Patrons) access the Apps and the Apps only. Those users cannot access the #Bare-metal servers, #Backup server, #Virtual environments (VEs), or #Security tools.
The Patrons access the Apps via those IPv4 addresses that are associated with the particular App. Opplet.net provides the Patrons with access either automatically or manually, via bureaucrats or other power-users.

Power-user access

Power-users of the Farm (hereinafter, the Admins) are those users who are authorized to access more resources of the Farm than a regular Patron.
  1. Hardware-level admin. Administrative access to #Bare-metal servers and #Backup server is carried out without any IP addresses, through the administrative panel and administrative consoles that #Service provider grants to CNMCyber Customer. The customer grants hardware-level admin access personally.
  2. VE-level admin. Administrative access to #Virtual environments (VEs) and #Security tools is carried out through IPv6 addresses linked to those tools. Access credentials are classified and securely stored in CNM Lab.
  3. App-level admin. Administrative access to the Apps is carried out through the IPv4 addresses associated with the particular App. At the moment, those accesses are provided by other Admins manually.

Infrastructure

The infrastructure of the Farm consists of hardware, namely the #Bare-metal servers and #Backup server, as well as the #Bridges, all rented from the #Service provider.

Service provider

Hetzner has been serving as CNMCyber Team's Internet service provider (ISP) and the lessor of the #Infrastructure since 2016. Offers from other potential providers, specifically Contabo and DigitalOcean, have been periodically reviewed, but no one else has offered a better quality-to-price ratio on a long-term basis.

Choice of bare-metal

Due to the lower cost, the #Bare-metal servers were purchased via the #Service provider's auction -- https://www.hetzner.com/sb?hdd_from=500&hdd_to=1000 -- based on the following assumptions:
  • Number: ProxmoxVE normally requires three nodes. The third node is needed to provide quorum; however, it does not necessarily have to run applications. At the same time, Ceph requires at least three nodes.
  • Hard drives:
    1. The hard drive storage capacity of any Node shall be at least 512 GB.
    2. Because Ceph is selected to power the #Storage platform, every hard drive of the Farm shall be an NVMe SSD.
  • Processors and memory:
    1. Two of the Farm's Nodes shall have at least 32 GB of memory each; the requirements for the third Node may be lower because of ProxmoxVE's characteristics.
    2. Servers that deploy Intel Xeon E3-1275v5 processors are preferable over those that deploy Intel Core i7-7700 ones.
  • Location: At least two Nodes shall be located in the same data center. Although the #Service provider does not charge for internal traffic, this circumstance increases the speed of the whole Farm. If no nodes are available in the same data center, they shall be sought in the same geographic location.
The hardware characteristics of the chosen Nodes are presented in #Bare-metal servers.

Bridges

The network of each Node uses a bridge according to the model selected by default in Network Configuration.

Hetzner vSwitches (hereinafter, the Bridges) serve as bridges for the #Communication channels, connecting the Nodes in networks and switching traffic from one Node to another. The #Service provider provides CNMCyber Team with the Bridges; the team can order up to 5 of them to be connected to one Node. The Bridges come with the lease of the Nodes.
The Farm utilizes two Bridges:
  1. The internal Bridge serves as the hub for the node and storage networks. It is located on an internal, private IPv6 address to provide for data transfer between the Nodes and their storage spaces.
  2. The external Bridge serves as the hub for the public network, the Internet. It is located on an external, public IPv4 address to provide for data transfer between the Farm's publicly-available resources and other Internet resources.
The Farm cannot support high availability of the Bridges. The resiliency of the Bridges is the courtesy of their owner, the #Service provider.

Backup server

A backup server is deployed on a 1 TB, unlimited traffic storage box BX-11 that has been rented for that purpose.

Basic features

  • 10 concurrent connections
  • 100 sub-accounts
  • 10 snapshots, 10 automated snapshots
  • FTP, FTPS, SFTP, SCP
  • Samba/CIFS
  • BorgBackup, Restic, Rclone, rsync via SSH
  • HTTPS, WebDAV
  • Usable as a network drive

Choice of backup COTS

Proxmox Backup Server

Description

#Service provider's description: Storage Boxes provide you with safe and convenient online storage for your data. Score a Storage Box from one of Hetzner Online's German or Finnish data centers! With Hetzner Online Storage Boxes, you can access your data on the go wherever you have internet access. Storage Boxes can be used like an additional storage drive that you can conveniently access from your home PC, your smartphone, or your tablet. Hetzner Online Storage Boxes are available with various standard protocols which all support a wide array of apps. We have an assortment of diverse packages, so you can choose the storage capacity that best fits your individual needs. And upgrading or downgrading your choice at any time is hassle-free!

Bare-metal servers

The #Virtual environments (VEs) are deployed on three bare-metal servers. As a result of the #Choice of bare-metal, #Node 1 hardware, #Node 2 hardware, and #Node 3 hardware have been rented for that purpose.

Node 1 hardware

1 x Dedicated Root Server "Server Auction"
  • Intel Xeon E3-1275v5
  • 2x SSD M.2 NVMe 512 GB
  • 4x RAM 16384 MB DDR4 ECC
  • NIC 1 Gbit Intel I219-LM
  • Location: FSN1-DC1
  • Rescue system (English)
  • 1 x Primary IPv4

Node 2 hardware

1 x Dedicated Root Server "Server Auction"
  • Intel Xeon E3-1275v5
  • 2x SSD M.2 NVMe 512 GB
  • 4x RAM 16384 MB DDR4 ECC
  • NIC 1 Gbit Intel I219-LM
  • Location: FSN1-DC1
  • Rescue system (English)
  • 1 x Primary IPv4

Node 3 hardware

1 x Dedicated Root Server "Server Auction"
  • Intel Xeon E3-1275v5
  • 2x SSD M.2 NVMe 512 GB
  • 4x RAM 16384 MB DDR4 ECC
  • NIC 1 Gbit Intel I219-LM
  • Location: FSN1-DC1
  • Rescue system (English)

See also

Related lectures

Used terms

On this wiki page, the following terms are used:
  • Admin. A power-user of the Farm or any user who is authorized to access more resources of the Farm than a regular Patron.
  • App. Any of three end-user applications with which end-users of the Farm interact.
  • Bridge. A Hetzner vSwitch that the Farm utilizes.
  • Farm. CNM Bureau Farm, which this very wikipage describes.
  • Gateway. The Farm's LAN that includes a load balancer and a reverse proxy.
  • Node. One of the Farm's hardware servers with all of the software installed on top of it.
  • Patron. An end-user of the Farm.

Useful recommendations