Talk:Кампусна Ферма
Revision of 20:15, 6 August 2022
Based on Delova Farm
Since you already have an HA cluster, you theoretically have two main options.
- First, you can expand your storage capacity by adding a new hard disk/SSD/NVMe drive to each of your Proxmox nodes and then adding those disks to your current Ceph storage (see the sketch after this list).
- The other option is to attach iSCSI or NAS storage to your Proxmox cluster.
For either option, you would need to upgrade the server hardware on your Contabo servers: attach iSCSI or NAS storage, or upgrade the SSDs on each node.
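A minimal sketch of the first option, assuming Proxmox VE 6+ with Ceph already configured through Proxmox; the device name is a placeholder:

```
# Run on each Proxmox node after installing the new disk.
# /dev/nvme1n1 is an assumed device name -- adjust per node.
pveceph osd create /dev/nvme1n1

# Verify the new OSDs joined the Ceph cluster
ceph osd tree
ceph -s
```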
Suggestions regarding implementation
I.I.
- For deployment, my recommendation is to go with Docker.
- 3 VPS would be enough for this project.
- The first example runs everything on all VPS nodes (the example from the picture):
- MariaDB with the Galera plugin
- Maxscale
- Moodle
- MediaWiki
- Nginx Proxy (for Cloudflare certificates and custom ports)
- Another example: two small VPS for MariaDB and one nano node for the MariaDB arbiter
- Two VPS for Maxscale, Moodle, MediaWiki, and Nginx Proxy.
- A total of 5 VPS.
- For MariaDB HA, we will use the Galera plugin and Maxscale for routing (a minimal config sketch follows this list).
- For real-time data sync, we would use lsyncd as needed, e.g. for images and similar files.
- Cloudflare would be used to route traffic to the Nginx proxy and to provide free certificates.
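A minimal sketch of the Galera side of this proposal, assuming MariaDB 10.4+ with Galera 4 on a systemd-based distribution; the node IPs, cluster name, and provider path are placeholders:

```
# Write the Galera config on every MariaDB node (paths and IPs are examples)
cat > /etc/my.cnf.d/galera.cnf <<'EOF'
[galera]
wsrep_on = ON
wsrep_provider = /usr/lib64/galera-4/libgalera_smm.so
wsrep_cluster_name = campus-galera
wsrep_cluster_address = gcomm://10.0.0.1,10.0.0.2,10.0.0.3
binlog_format = ROW
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
EOF

# Bootstrap the cluster on the first node only
galera_new_cluster

# On the remaining nodes, a normal start joins the cluster
systemctl start mariadb
```

Maxscale would then sit in front of the Galera nodes (e.g. with its readwritesplit router), and the arbiter on the nano node gives the two data nodes an odd quorum count.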
B.T.R.N.
- 1. How do you see the implementation of this task? (Methods and applications for implementation)
I will go with CentOS 7 or CentOS Stream 9. The first node will be set up with all the required software (MediaWiki, Moodle, and MariaDB). HA will be set up using the Corosync and Pacemaker packages. Finally, testing and handover.
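A sketch of the cluster bootstrap, assuming CentOS Stream 9 with pcs 0.10+ syntax (CentOS 7 ships the older `pcs cluster auth` form); the node names, cluster name, and password are placeholders:

```
# On every node: install the HA stack and start the pcs daemon
dnf install -y pacemaker corosync pcs
systemctl enable --now pcsd
echo 'hacluster:CHANGE_ME' | chpasswd   # same password on all nodes

# From one node: authenticate the nodes and create the cluster
pcs host auth node1 node2 -u hacluster -p CHANGE_ME
pcs cluster setup campus-cluster node1 node2
pcs cluster start --all
pcs cluster enable --all
```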
- 2. Each node has two IP addresses: IPv4 and IPv6. Do we need to buy additional IP addresses?
This should be sufficient. At most, we might need a floating IP address, which will be used to access the hosted services (MediaWiki and Moodle).
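If a floating IP is used, it can be managed by the cluster itself through the standard IPaddr2 resource agent; the address below is a documentation-range placeholder:

```
# Floating IP that Pacemaker moves to whichever node is active
pcs resource create cluster_vip ocf:heartbeat:IPaddr2 \
    ip=203.0.113.10 cidr_netmask=24 op monitor interval=30s
```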
- 3. How will cluster monitoring be provided?
The cluster will be monitored using a GUI. I will also provide a set of commands that can be used to query the cluster in case the GUI is not accessible.
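For example, with a Pacemaker/Corosync stack, the cluster can be queried from any node with standard commands such as:

```
pcs status            # overall cluster, node, and resource state
crm_mon -1            # one-shot snapshot of the cluster state
corosync-cfgtool -s   # link status of the messaging layer
pcs resource status   # resource-level view only
```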
- 4. How much will we spend for the cluster? (with all the additional costs)
We might need shared storage such as a SAN; I will confirm whether this is really necessary. Other than that, I do not foresee any additional costs.
- 5. How much time do you need to implement the cluster with report documentation?
I expect the entire setup and documentation to take somewhere between 50 and 60 hours.
- 6. "The first node will be setup with all the required softwares (MediaWiki, Moodle and MariaDB)." -- What will be setup on other nodes?
For HA, there are two things to take care of: 1) data and 2) services. If we use NAS, we do not need to worry about data replication. The other nodes will then have a similar setup but with their services down; bringing them up on a failover switch is taken care of by Pacemaker. In summary, the other nodes will have an almost identical setup, with slight differences, and with their services stopped (see the sketch below).
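A sketch of how Pacemaker would keep the services stopped on standby nodes and start them on failover, assuming the services run under systemd and the `cluster_vip` resource from above exists; the resource names are placeholders:

```
# Register the services as cluster resources; Pacemaker then starts
# them only on the active node and keeps them stopped elsewhere
pcs resource create web systemd:nginx op monitor interval=30s
pcs resource create db systemd:mariadb op monitor interval=30s

# Keep the services on the same node as the floating IP, started after it
pcs constraint colocation add web with cluster_vip INFINITY
pcs constraint order cluster_vip then web
```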
- Another approach is to set up a load balancer using HAProxy and use Gluster to replicate the shared volumes. You would get an easy-to-use web interface to manage the cluster.
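A minimal sketch of the Gluster side of that approach, assuming three nodes with dedicated brick filesystems; the hostnames, volume name, and paths are placeholders:

```
# From node1: form the trusted pool and create a replicated volume
gluster peer probe node2
gluster peer probe node3
gluster volume create shared replica 3 \
    node1:/bricks/shared node2:/bricks/shared node3:/bricks/shared
gluster volume start shared

# Mount the replicated volume on each node
mount -t glusterfs node1:/shared /mnt/shared
```

HAProxy's built-in stats page could then serve as the web view of backend health.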