Talk:Кампусная Ферма — difference between revisions
Revision as of 19:48, 10 August 2022
Building on Delova Farm
Since you already have an HA cluster, you theoretically have two main options.
- First, you can grow your storage capacity by adding new HDD/SSD/NVMe drives to all of your Proxmox nodes and then adding those drives to your current Ceph storage system.
- Alternatively, you can attach iSCSI or NAS storage to your Proxmox cluster.
Either way, you would need to upgrade the server hardware on your Contabo servers: use iSCSI or NAS storage, or upgrade the SSDs on each node.
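The first option above (new drive per node, added to the existing Ceph pool) comes down to a couple of commands per node. This is a sketch only, assuming Proxmox VE's bundled Ceph tooling; the device name is a placeholder. The commands are collected into a sample script for review rather than executed, since they require a live Proxmox/Ceph cluster.

```shell
# Collect the per-node steps into a reviewable sample script.
cat > add_osd.sh.sample <<'EOF'
#!/bin/sh
# Sketch: turn a newly attached disk on a Proxmox node into a Ceph OSD.
# /dev/sdb is a placeholder for the new drive; repeat on each node.
pveceph osd create /dev/sdb
# Confirm the new OSD joined the cluster and is weighted correctly.
ceph osd df tree
EOF
echo "wrote add_osd.sh.sample"
```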
Implementation proposals
I.I.
- For deployment, my recommendation is to go with Docker.
- 3 VPS would be enough for this project.
- The first example runs everything on all VPS (example from the picture):
- MariaDB with the Galera plugin
- MaxScale
- Moodle
- MediaWiki
- Nginx proxy (for Cloudflare certificates and custom ports)
- Another example: two small VPS for MariaDB and one nano node for a MariaDB arbiter.
- Two VPS for MaxScale, Moodle, MediaWiki, and the Nginx proxy.
- A total of 5 VPS.
- For MariaDB HA, we will use the Galera plugin and MaxScale for routing.
- For real-time data sync, we would use lsyncd as needed, for images and similar files.
- They would use Cloudflare to route to the Nginx proxy and for free certificates.
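The lsyncd idea above can be sketched as a small config. This is a minimal sketch, assuming a one-way rsync-over-ssh mirror of an uploads directory; the paths and hostname are placeholders, not values from the project.

```shell
# Write a sample lsyncd config (lsyncd configs are Lua).
cat > lsyncd.conf.lua.sample <<'EOF'
settings {
    logfile    = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd.status",
}
-- Mirror the local uploads directory to a second node over rsync+ssh.
sync {
    default.rsyncssh,
    source    = "/var/moodledata",   -- placeholder path
    host      = "node2.internal",    -- placeholder host
    targetdir = "/var/moodledata",
    delay     = 5,  -- batch filesystem events for 5 s before syncing
}
EOF
echo "wrote lsyncd.conf.lua.sample"
```

Note that lsyncd replication is one-way; if uploads can be written on any node, a shared or replicated filesystem (NFS, GlusterFS) avoids the single-writer limitation.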
B.T.R.N.
- 1. How do you see the implementation of this task? (Methods and applications for implementation)
I will go with CentOS 7 or CentOS Stream 9. The first node will be set up with all the required software (MediaWiki, Moodle, and MariaDB). HA will be set up using the Corosync and Pacemaker packages. Finally, testing and handover.
- 2. Each node has two IP addresses: IPv4 and IPv6. Do we need to buy additional IP addresses?
This should be sufficient. At most, we might need a floating IP address, which will be used to access the hosted services (MediaWiki and Moodle).
- 3. How will cluster monitoring be provided?
Clusters will be monitored using a GUI. I will also provide a set of commands that can be used to query the cluster in case the GUI is not accessible.
- 4. How much will we spend for the cluster? (with all the additional costs)
We might need shared storage such as a SAN. I will confirm whether this is really necessary. Other than that, I do not see any additional costs.
- 5. How much time do you need to implement the cluster with report documentation?
I expect the entire setup and documentation to take somewhere between 50 and 60 hours.
- 6. "The first node will be set up with all the required software (MediaWiki, Moodle, and MariaDB)." -- What will be set up on the other nodes?
For HA, there are two things to take care of: 1) data and 2) services. If we use a NAS, we do not need to worry about data replication. So the other nodes will have an almost identical setup, but with their services down; the services are brought up on a failover switch, which Pacemaker takes care of. In summary, the other nodes will have nearly the same setup, slightly adjusted, with their services stopped.
- Another approach is to set up a load balancer using HAProxy and Gluster to replicate the shared volumes. You will have an easy-to-use web interface to manage the cluster.
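The Pacemaker/Corosync setup described above hinges on a floating IP resource that follows the active node. Below is a minimal sketch, assuming the `pcs` tooling; the IP, netmask, and service name are placeholders. The commands are written to a sample script for review, since they require a running cluster.

```shell
cat > pacemaker_setup.sh.sample <<'EOF'
#!/bin/sh
# Sketch: a floating IP that Pacemaker moves to the surviving node.
# 203.0.113.10 and "nginx" are placeholders.
pcs resource create cluster_vip ocf:heartbeat:IPaddr2 \
    ip=203.0.113.10 cidr_netmask=24 op monitor interval=30s
# Run the web service on whichever node currently holds the VIP.
pcs resource create web_service systemd:nginx op monitor interval=30s
pcs constraint colocation add web_service with cluster_vip INFINITY
pcs constraint order cluster_vip then web_service
EOF
echo "wrote pacemaker_setup.sh.sample"
```

The colocation and ordering constraints are what make the service fail over together with the address instead of independently.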
M.B.Y.
- Q1: For MediaWiki and Moodle we can use HAProxy.
- AVideo depends on how many users use your app.
https://github.com/WWBN/AVideo/wiki/AVideo-Platform-hardware-requirements
- Q2: no.
- Q3: I use Nagios for service and network availability, alertra.com for layer-7 services, Grafana for logs, and goaccess.io as a real-time web log analyzer.
- Q4: what you pay now + Alertra supervision costs.
- Q5: we need to agree on all the tasks first.
- I think it needs a week full-time.
- "database (not very clear)" -- synchronized on three nodes (MariaDB Galera); this decision is not final, and we will consider your suggestions. Can you give more details about HAProxy? (Which node will it be installed on? What happens if that node goes down? How many nodes are needed for HA? What solutions do you offer for the databases?)
For HAProxy failover: ha-diagram-animated.gif. For MariaDB, Galera is widely used, and we can also configure HAProxy for load balancing.
- "For HAProxy failover : ha-diagram-animated.gif" -- Do you need a floating IP for this? Where will HAProxy be installed? On one of the nodes? If the node with HAProxy goes down?
For a well-designed architecture, we need at least two HAProxy nodes. If one HAProxy node goes down, the second one is activated (the floating-IP role). HAProxy needs a separate node for security and scalability reasons with a big app, but we can run it on a single node (with or without failover) together with other apps (for a test or a small app), depending on the number of users and the SLA. HAProxy is very reliable.
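The two-HAProxy failover with a floating IP described above is commonly implemented with VRRP via keepalived (the tool is my assumption; the answer only mentions the floating-IP role). A minimal sketch, with placeholder interface, router ID, and address:

```shell
cat > keepalived.conf.sample <<'EOF'
# Sketch: VRRP config for the active HAProxy node. The standby node
# would use "state BACKUP" and a lower priority. All values are
# placeholders.
vrrp_script chk_haproxy {
    script "pidof haproxy"   # demote this node if haproxy is not running
    interval 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    virtual_ipaddress {
        203.0.113.10/24      # the floating IP clients connect to
    }
    track_script {
        chk_haproxy
    }
}
EOF
echo "wrote keepalived.conf.sample"
```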
K.S.
- 1. For this project, I suggest using an InnoDB cluster with ProxySQL as middleware.
- 2. Also, we can use Checkmk or PMM for health monitoring.
- 3. It doesn't require an additional IP, but it's recommended that we use a private network for the cluster.
- 4. All components will be open-source software, so it will be zero cost. But you would need a minimum of three servers for the database nodes (one primary and two secondaries). ProxySQL can be installed on your application server.
- 5. In ideal conditions, it'll take 6-8 hours for everything, including documentation.
- Why do you suggest using InnoDB? Our applications use MariaDB.
A MariaDB Galera cluster is also a good choice, but a MySQL InnoDB cluster is much more scalable. Also, MySQL InnoDB and MariaDB Galera clusters are interchangeable for most use cases (95%). If you want, I can use the MariaDB Galera cluster; it'll give you about the same performance as an InnoDB cluster for 4-5-node clusters. Let me share one of my experiences with a Galera cluster. We had a Galera cluster with six dedicated servers on OVH. It was a master-master cluster with a two-way write lock. The issue was that when we added a new database node, it increased the time for INSERT/REPLACE queries. So we built a new cluster with asynchronous replication, and it can handle 4x more queries per second (QPS) than the previous cluster.
- Programs and databases are currently functioning. We don't mind considering your option, but how do we migrate to InnoDB? Do you have any ideas about this?
Yes, I can help you with the migration, because dumps of MySQL and MariaDB databases are compatible, so we can migrate easily. Still, it is hard to verify everything, including the databases and users, after migration. Likewise, I have sufficient experience with the MariaDB Galera cluster too.
- If we stick with MariaDB, what will allow us to get HA for our applications?
MariaDB Xpand (MariaDB MaxScale) or ProxySQL. I'd still suggest ProxySQL because of its vast number of functions. https://proxysql.com/, https://mariadb.com/products/enterprise/xpand/.
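K.S.'s ProxySQL suggestion boils down to registering the database nodes and application users through ProxySQL's SQL-like admin interface. A minimal sketch, with placeholder hostnames and credentials, saved to a file for review (it would normally be fed to the admin interface, which listens on port 6032):

```shell
cat > proxysql_setup.sql.sample <<'EOF'
-- Sketch: register three Galera nodes in one hostgroup.
-- Hostnames and credentials are placeholders.
INSERT INTO mysql_servers (hostgroup_id, hostname, port)
VALUES (0, 'db1.internal', 3306),
       (0, 'db2.internal', 3306),
       (0, 'db3.internal', 3306);
-- Application user routed to hostgroup 0 by default.
INSERT INTO mysql_users (username, password, default_hostgroup)
VALUES ('moodle', 'change-me', 0);
-- Activate and persist the configuration.
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL USERS TO RUNTIME;   SAVE MYSQL USERS TO DISK;
EOF
echo "wrote proxysql_setup.sql.sample"
```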
A.M.
- A.M.:
- I would like to help you achieve this project of configuring HAProxy in front of your 3 applications.
- Please confirm that you want a proxy to load-balance visitors or users of the 3 apps in front of the apps, and not between the apps and the database?
- If it is in front of the apps, let me propose this:
- install HAProxy in front of the apps, on another VPS (Linux + HAProxy).
- configure IP firewalling, if any, between HAProxy and the app nodes.
- configure the 3 app nodes as backends of HAProxy.
- configure HAProxy to listen on the right port for incoming visitor/user requests.
- configure the SSL certificates of the 3 apps on HAProxy, if any.
- test the whole setup.
- Those are my ideas. Note that I prefer to do this as a fixed-price project.
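The steps above can be sketched as a single haproxy.cfg. This is a sketch under assumptions, not A.M.'s actual config: the hostnames, addresses, ports, and certificate path are placeholders, and host-header routing is one plausible way to separate the apps.

```shell
cat > haproxy.cfg.sample <<'EOF'
# Sketch: HAProxy in front of the app nodes, with TLS terminated on
# the proxy. All names, addresses, and ports are placeholders.
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend apps_in
    bind *:443 ssl crt /etc/haproxy/certs/apps.pem
    acl is_moodle hdr(host) -i moodle.example.org
    acl is_wiki   hdr(host) -i wiki.example.org
    use_backend moodle_be if is_moodle
    use_backend wiki_be   if is_wiki

backend moodle_be
    balance roundrobin
    server node1 10.0.0.1:80 check
    server node2 10.0.0.2:80 check

backend wiki_be
    balance roundrobin
    server node1 10.0.0.1:8080 check
    server node2 10.0.0.2:8080 check
EOF
echo "wrote haproxy.cfg.sample"
```

The `check` keyword makes HAProxy health-check each backend server and stop routing to a node that goes down, which is what the node-shutdown test later in the thread relies on.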
- V.:
- We haven't decided what it should look like yet. We have three VPS with working programs, but we may choose to get three new VPS for this project. We want to know more about your implementation method.
- Can you answer a few questions:
- 1. "install HAProxy in front of the apps, on another VPS (Linux + HAProxy)." - What will happen if the node with HAProxy is disabled? Will the applications be available?
- 2. How will cluster monitoring be provided?
- 3. Each node has two IP addresses: IPv4 and IPv6. Do we need to buy additional IP addresses?
- 4. How much time do you need to implement a cluster with report documentation?
- 5. How much will the cluster cost us with all the additional costs?
- A.M.:
- 1. "install HAProxy in front of the apps, on another VPS (Linux + HAProxy)." - What will happen if the node with HAProxy is disabled? Will the applications be available?
--> The node with HAProxy will be the entry point of the apps; if it is disabled for any reason, it would need to get back online ASAP. The emergency measure would be to point the DNS names at the IPs of the apps directly while waiting for HAProxy. Of course, having a second backup HAProxy would be a more secure option.
- 2. How will cluster monitoring be provided?
--> The best solution is to have another VPS just for that, external to the apps and HAProxy. It might be just pings or HTTP requests. It might also be a paid solution from an online service.
- 3. Each node has two IP addresses: IPv4 and IPv6. Do we need to buy additional IP addresses?
--> Yes, just one, for HAProxy. It will be used as the entry point for the apps, from which HAProxy sends traffic to the backends (here, the app servers).
- 4. How much time do you need to implement a cluster with report documentation?
--> I estimate this at about 72 hours.
- 5. How much will the cluster cost us with all the additional costs?
--> I estimate this at around $330.
- V.:
- We've changed the ad a bit. Hope this makes the task easier. A few more questions:
- 1. "Of course, having a second backup HAProxy would be a more secure option.", "The best solution is to have another VPS just for that, external to the apps and HAProxy." -- How many VPS would be optimal? What will be on each VPS?
- 2. What do you think about databases? How will they be implemented?
- A.M.:
- image.
- V.:
- 1. Do we need database synchronization? If so, what will synchronize them?
- 2. You have not removed AVideo from your solution. Wouldn't that be a problem?
- 3. Will your solution pass our test?
Suppose students are taking an exam in our Moodle course, and at that moment a node goes down. Will your solution let them continue taking the exam?
- 4. "The best solution is to have another VPS just for that, external to the apps and HAProxy." -- What will that give us? Why is it the best monitoring solution?
- A.M.:
- 1. Do we need database synchronization? If so, what will synchronize them?
--> Yes, it will be synchronized with a Galera/MySQL cluster.
- 2. You have not removed AVideo from your solution. Wouldn't that be a problem?
--> I can remove it from the final solution.
- 3. Will your solution pass our test?
--> It will. The test consists of shutting down two nodes and checking whether the apps are still available. HAProxy will then send the incoming traffic to the remaining node, and the monitoring node will email you or message you on Slack at that time.
- The second test is:
"During documentation testing, we will erase the software from one node, implement the rescue, and one expert will try to restore the software using your documentation." ---> This means I will install and configure all apps on all nodes, which is beyond the HAProxy config.
- I will need to re-estimate the budget if so.
Please confirm whether all apps are already available or need to be installed and configured along with HAProxy.
- 4. "The best solution is to have another VPS just for that, external to the apps and HAProxy." -- What will that give us? Why is it the best monitoring solution?
--> It gives an external vantage point: the ability to check all the apps from outside the 3 app VPS.
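The external check A.M. describes (plain HTTP requests from a separate VPS) can be sketched as a small cron-driven script. The URLs and the alert delivery are placeholders, not part of any candidate's actual proposal.

```shell
#!/bin/sh
# Sketch: external availability check, meant to run from cron on the
# monitoring VPS. URLs and the alert command are placeholders.
check_url() {
    url="$1"
    # Ask only for the HTTP status code; treat transport errors as 000.
    code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$url") || code=000
    if [ "$code" != "200" ]; then
        echo "ALERT: $url returned $code"
        # mail -s "app down: $url" admin@example.org </dev/null
        # ...or POST to a Slack webhook here.
    else
        echo "OK: $url"
    fi
}
check_url "https://moodle.example.org/login/index.php"
check_url "https://wiki.example.org/"
```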
Reaction to the proposals
All the abbreviations and all the technologies the candidates mentioned should appear on Правка. I added WordPress to the task, since it also runs on Кампусная Ферма. Earlier I planned to make it a separate project, but since we moved AVideo out, it makes sense to bring WordPress back.
Regarding the architecture, I like A.M.'s approach: 5 VPS, of which one is reserved for the entry point and another for monitoring.
- Entry point (a request distributor on a public web address). There are HAProxy and Nginx Plus. Anything else? Is the Nginx Plus code closed? If so, Nginx Plus is of no interest to us on this project. If the code is open, we will also use it for Оплёт.
- Data synchronization. We have a well-functioning Galera. Extending Galera toward MaxScale and/or Xpand: what would we gain, and how much would it cost?
- Monitoring. I don't see any solid proposals.
- Firewalls. I don't see any solid proposals.
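On the MaxScale question above: in a Galera setup, MaxScale mainly adds a Galera-aware monitor and read/write-split routing in front of the nodes. A minimal sketch of a maxscale.cnf, with placeholder addresses and credentials (not a quote from any proposal):

```shell
cat > maxscale.cnf.sample <<'EOF'
# Sketch: MaxScale in front of a 3-node Galera cluster.
# Addresses and credentials are placeholders.
[server1]
type=server
address=10.0.0.1
port=3306
protocol=MariaDBBackend

[server2]
type=server
address=10.0.0.2
port=3306
protocol=MariaDBBackend

[server3]
type=server
address=10.0.0.3
port=3306
protocol=MariaDBBackend

[Galera-Monitor]
type=monitor
module=galeramon
servers=server1,server2,server3
user=maxscale
password=change-me

[RW-Split]
type=service
router=readwritesplit
servers=server1,server2,server3
user=maxscale
password=change-me

[RW-Split-Listener]
type=listener
service=RW-Split
protocol=MariaDBClient
port=4006
EOF
echo "wrote maxscale.cnf.sample"
```

Applications then connect to MaxScale's listener port instead of a database node, so a failed node is routed around automatically.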
By the way, we have 30 invitations that we haven't used yet, and another 30 from the ad for the hardware cluster :)