Proxmox is an operating system based on Debian Linux. Its main feature is the web interface, which simplifies system virtualization and containerization and lets you merge multiple servers into one cluster and manage them all from the same interface.
Proxmox doesn’t have a standard graphical user interface; it’s managed through the command line and the web interface.
The web interface is available on port 8006 of the private IP address set during installation, served over HTTPS. So, for example, you can access it at https://192.168.1.100:8006/ and log in with the username root and the password set during installation.
Since the web interface uses HTTPS and has no way to obtain a valid signed SSL certificate during installation, your browser may whine about an invalid security certificate, which you can ignore without repercussions.
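Everything the web interface does is also exposed over a REST API on the same port. Here is a minimal sketch of logging in to it with Python, assuming the hypothetical address and root password from the example above; the verify=False flag is what tells the HTTP library to accept the self-signed certificate:

```python
import requests

requests.packages.urllib3.disable_warnings()  # silence the self-signed-cert warning

HOST = "https://192.168.1.100:8006"  # hypothetical address from the example above

# POST credentials to the ticket endpoint; Proxmox returns a session
# ticket (used as a cookie) and a CSRF token (used as a header on writes).
auth = requests.post(
    f"{HOST}/api2/json/access/ticket",
    data={"username": "root@pam", "password": "your-root-password"},
    verify=False,  # accept the self-signed certificate, as discussed above
).json()["data"]

cookies = {"PVEAuthCookie": auth["ticket"]}
headers = {"CSRFPreventionToken": auth["CSRFPreventionToken"]}

# A quick sanity check: list the nodes in the cluster.
print(requests.get(f"{HOST}/api2/json/nodes", cookies=cookies, verify=False).json())
```

The later sketches in this text reuse the cookies and headers defined here.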
When you log in to the web interface, Proxmox will ask you to register it. Registration is not required and the popup is quite annoying, but it can be disabled.
If a hard disk fails, all data on it can be lost, so when running servers it is a good idea to have backups.
RAID is a type of virtualization: an abstract hard disk made of multiple real physical disks. To the operating system it appears as a normal physical hard disk, but in reality it’s multiple disks that share the data between them.
RAID can be set up in the BIOS, or something similar to it, without even booting the OS. There are also special PCI cards we can attach to the motherboard of our servers that combine the disks into one, so the operating system is not even aware that they are set up in RAID.
It can also be set up as software RAID during OS installation. Proxmox asks you during installation if you want to use RAID.
There are different types (levels) of RAID. The simplest is RAID 0, which combines multiple smaller disks into one big disk. This way, with two hard disks of 300 GB you can create one virtual disk of 600 GB and install your OS there.
This is quite useful when you have big files you can’t split and store across multiple hard drives. A lot of the files we will use can get really big, like SQL databases or VM hard disk files.
There is also RAID 1, a way to combine two disks into one disk of the same size that stores a copy of everything on both disks. If one hard drive fails, all the data is still stored on the other one and the broken disk can be replaced. Once replaced, all data from the working disk is copied to the new disk.
RAID has some performance benefits as well, since the slowest operation in computers is usually accessing the hard drives. With RAID 0 each file is stored half on each disk, while with RAID 1 both disks store it in full. Either way, when reading a file the OS can read from both disks at the same time. Writing in RAID 0 is twice as fast as well, but in RAID 1 it still takes the same amount of time, since the full file has to be written to both disks.
The problem with RAID 1 is that if a disk doesn’t fail completely but silently corrupts data, there is no way to know which disk stored the correct copy. That is one reason to use 3 or more disks and add parity information computed from the data, so the contents of any one bad disk can be reconstructed from the others. This is called RAID 5.
With 3 disks, RAID 5 gives you the usable capacity of 2 of them; the equivalent of one disk is spent on parity.
You can use RAID 5 with more than 3 disks; in that case the usable storage space is that of N−1 disks, where N is the number of disks you have.
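To make the capacity rules concrete, here is a small Python sketch that models the usable space of the levels discussed so far. It’s a simplified model: real arrays have overheads this ignores, and mixed-size arrays are limited by the smallest disk, which the min() call reflects.

```python
def usable_capacity(level: int, disks: list[float]) -> float:
    """Usable space in GB for a few common RAID levels (simplified model)."""
    n, size = len(disks), min(disks)  # arrays are limited by the smallest disk
    if level == 0:        # striping: all space is usable
        return n * size
    if level == 1:        # mirroring: one disk's worth, the rest are copies
        return size
    if level == 5:        # striping + parity: one disk's worth goes to parity
        if n < 3:
            raise ValueError("RAID 5 needs at least 3 disks")
        return (n - 1) * size
    raise ValueError(f"RAID level {level} not modeled here")

print(usable_capacity(0, [300, 300]))        # 600 -- the example from the text
print(usable_capacity(1, [300, 300]))        # 300 -- same size, fully mirrored
print(usable_capacity(5, [300, 300, 300]))   # 600 -- capacity of N-1 disks
```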
We can also use these levels in combination with each other. Using RAID 0 and RAID 1 together is called RAID 10 or RAID 01, depending on the specifics of how it is set up.
During Proxmox installation you set the RAID level you want to use, if any, and the file system. ZFS is the newest of the offered filesystems and supports rollback. In the Proxmox options RAID 1 is also called a mirror.
In the Proxmox web interface you will see local and local-lvm volumes at the bottom of the menu on the far left of the screen. Those are two different parts of your hard drive, and each volume holds specific kinds of data: by default, VM hard drive files (volumes) go on local-lvm, while templates and ISO installation images for VM and CT creation go on local.
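You can also list these storages through the API. A minimal sketch, reusing the cookies from the login example above and assuming a node named pve (yours may be named differently):

```python
import requests

HOST = "https://192.168.1.100:8006"   # same hypothetical host as before
NODE = "pve"                          # assumed node name

# `cookies` comes from the login sketch earlier in this text.
resp = requests.get(f"{HOST}/api2/json/nodes/{NODE}/storage",
                    cookies=cookies, verify=False)
for storage in resp.json()["data"]:
    # Each entry reports its name, type, and which content kinds it holds
    # (e.g. "images" for VM disks, "iso,vztmpl" for installers and templates).
    print(storage["storage"], storage["type"], storage.get("content"))
```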
Virtual machines (VMs) are basically programs running on a computer that run a whole operating system inside them. The program we use to manage them is called QEMU. When creating a VM, it makes a file that it uses as the hard disk for that VM, and it mimics the operations of a normal computer inside that VM.
You can create VMs that use only a part of your available RAM, CPU cores, etc. The OS running in that VM will only see the resources that you assign to it when you create it. We can also change the resources allocated to the VM after creating it.
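Changing those resources later can also be done over the API. A sketch under the same assumptions as the earlier examples, using a hypothetical VM with ID 100:

```python
import requests

HOST = "https://192.168.1.100:8006"   # same hypothetical host as before
NODE, VMID = "pve", 100               # assumed node name and made-up VM ID

# `cookies` and `headers` come from the login sketch earlier in this text.
# PUT the new limits to the VM's config endpoint; memory is in MiB.
resp = requests.put(f"{HOST}/api2/json/nodes/{NODE}/qemu/{VMID}/config",
                    data={"memory": 2048, "cores": 2},
                    cookies=cookies, headers=headers, verify=False)
print(resp.json())
```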
We use VMs in system administration for multiple reasons. We can segregate different server services onto multiple VMs: one can be just for running an email server, while another one is just for running a web site. If one of the services gets hacked or breaks something system-wide, we don’t lose all the services. You can also keep different services running on different OSes: some on Debian Linux, others on Mint, some on BSD systems, etc.
VMs can also be moved between different servers. Sometimes we need to reboot the host server to upgrade or reinstall the host OS, or turn it off for hardware maintenance. Meanwhile we can transfer the VM to another computer and keep it running while our main server is offline. QEMU also supports live migration of VMs between computers: VMs can keep running uninterrupted while being copied from one server to another.
Proxmox uses QEMU and supports easy live migration between servers, if you create a cluster and add the other server to it.
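The web interface has a Migrate button for this, and the same action is available over the API. A sketch under the same assumptions as before, moving the hypothetical VM 100 to a second cluster node I’ll call pve2:

```python
import requests

HOST = "https://192.168.1.100:8006"   # same hypothetical host as before
NODE, VMID = "pve", 100               # assumed source node and made-up VM ID

# `cookies` and `headers` come from the login sketch earlier in this text.
# online=1 asks for a live migration, i.e. the VM keeps running while moving.
resp = requests.post(f"{HOST}/api2/json/nodes/{NODE}/qemu/{VMID}/migrate",
                     data={"target": "pve2", "online": 1},
                     cookies=cookies, headers=headers, verify=False)
print(resp.json())  # returns a task ID you can poll for progress
```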
To install a VM we can download any ISO installation image and install the OS from it.
Recent computers have hardware acceleration for running VMs that makes them a lot faster.
Even more recent computers have various other hardware acceleration instructions in the CPU, such as encryption (AES) or vector operations. By default, when creating a new VM, Proxmox selects a virtual processor type that supports AES acceleration, which can cause issues on older servers; there you need to select a CPU type without it.
In Proxmox you can create a VM by logging in to the web interface and clicking the “Create VM” button at the top right of the screen.
There are many options, but the most important are choosing the ISO installation image, selecting the maximum number of CPU cores allocated to the VM, the size of the RAM, and the maximum hard drive space. If using old hardware, you might also need to set a CPU type that doesn’t have AES acceleration.
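The same creation form maps onto one API call. A minimal sketch under the same assumptions as the earlier examples; the VM ID, name, ISO filename, and sizes are all made up for illustration, and the parameter names follow the qm/API conventions as I understand them:

```python
import requests

HOST = "https://192.168.1.100:8006"   # same hypothetical host as before
NODE = "pve"                          # assumed node name

# `cookies` and `headers` come from the login sketch earlier in this text.
resp = requests.post(
    f"{HOST}/api2/json/nodes/{NODE}/qemu",
    data={
        "vmid": 100,                                    # made-up VM ID
        "name": "mail-server",                          # made-up name
        "memory": 2048,                                 # RAM in MiB
        "cores": 2,                                     # max CPU cores
        "scsi0": "local-lvm:32",                        # 32 GB disk on local-lvm
        "ide2": "local:iso/debian-12.iso,media=cdrom",  # made-up ISO already on local
        "net0": "virtio,bridge=vmbr0",                  # default Proxmox bridge
        "onboot": 1,                                    # start at boot; see below
    },
    cookies=cookies, headers=headers, verify=False)
print(resp.json())
```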
To select an ISO image you first need to upload it to the local volume by clicking on it at the left of the screen and selecting the ISO Images tab. You can also download the ISO image directly to the Proxmox server in the same tab, which saves the time spent uploading the ISO image back to the Proxmox server after downloading it.
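That direct download is also an API call. A sketch under the same assumptions as before; the download-url endpoint and its parameters are from the Proxmox API as I understand it, and the URL and filename are placeholders:

```python
import requests

HOST = "https://192.168.1.100:8006"   # same hypothetical host as before
NODE = "pve"                          # assumed node name

# `cookies` and `headers` come from the login sketch earlier in this text.
# Ask the node itself to fetch the ISO straight into the `local` storage.
resp = requests.post(
    f"{HOST}/api2/json/nodes/{NODE}/storage/local/download-url",
    data={
        "content": "iso",
        "filename": "debian-12.iso",                 # placeholder filename
        "url": "https://example.com/debian-12.iso",  # placeholder URL
    },
    cookies=cookies, headers=headers, verify=False)
print(resp.json())
```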
When installing a VM you need to go through a regular OS installation. To access the VM’s screen, select the new VM in the menu at the far left of the web interface and go to its Console tab.
Once the VM is created you should enable the “Start at boot” option (the onboot flag in the API sketch above), so that the VM is automatically started when the Proxmox server reboots.
Linux Containers (LXC) are similar to VMs, but are better at sharing resources between multiple containers on the same host computer. All containers share the same Linux kernel, the same one the host server is running. The kernel is the core of the OS: its job is to manage RAM allocation between programs, schedule the running programs (processes) to run in parallel (more precisely, to take turns running for a little bit each) and manage CPU core usage by all programs.
Unlike VMs, containers need to be running Linux, though they can be different distributions like Debian, Alpine, etc. VMs can run any OS, but because of that QEMU doesn’t know when an OS really needs the resources allocated to it or is just being greedy and allocating all the RAM available to it.
Generally, hard disk space is only given as needed and the VM hard disk file is expanded on demand, for all OSes. CPU usage is also easy to share between VMs, but RAM gets fully allocated as much as possible for each VM. Furthermore, running an entire OS takes more RAM anyway than just sharing the host’s kernel.
Since the kernel is shared, we can’t just use the ISO installation image of a Linux distro. Instead we use something we call templates. Each container management program has its own way of downloading templates and installing them.
Because container management programs are familiar with these templates, they can make installation fast and automatic. During installation we can also often provide SSH keys, the root password and the hostname before creating the container, which makes it easier to create and maintain.
CTs are smaller than VMs, so they also boot quicker. However, they can’t be migrated live between servers, and there are multiple container technologies that aren’t compatible with each other.
LXC is the classic Linux container type, but Docker has recently become more popular. OSes like Proxmox don’t support Docker containers, and I am not sure if the LXC containers can be used as a substitute.
Docker CTs can’t be run inside LXC containers, but they can run inside VMs. Probably Docker gets confused when it is already inside another CT.
The only problem is that some projects only officially support Docker installations, so if we use Proxmox we can’t install them directly on the host from the web interface and have to run them inside a VM if we want to keep things simple.
Proxmox also has some service-specific templates, called TurnKey, that install a full Gitea service or something similar.
Creating a CT is a bit easier than a VM. To create one, you can click the “Create CT” button next to the “Create VM” button at the top right of the screen in the Proxmox web interface.
It has fewer options than VM creation, but the basics are the same, except there is no need to pick a CPU type for older hardware. You need to specify the max RAM, CPU cores and hard drive space.
It also allows adding your SSH key, root password, hostname and IP address right from the CT creation menu. There is no second installation step from the console like with a VM; CTs are ready to go.
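As with VMs, the whole form is one API call. A sketch under the same assumptions as the earlier examples; the CT ID, hostname, password, SSH key, and template filename are made up, and the parameter names follow the pct/API conventions as I understand them:

```python
import requests

HOST = "https://192.168.1.100:8006"   # same hypothetical host as before
NODE = "pve"                          # assumed node name

# `cookies` and `headers` come from the login sketch earlier in this text.
resp = requests.post(
    f"{HOST}/api2/json/nodes/{NODE}/lxc",
    data={
        "vmid": 200,                                            # made-up CT ID
        "hostname": "web-server",                               # made-up hostname
        "password": "a-strong-root-password",                   # placeholder password
        "ssh-public-keys": "ssh-ed25519 AAAA... admin@laptop",  # made-up key
        # A made-up template filename already downloaded to `local`:
        "ostemplate": "local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst",
        "rootfs": "local-lvm:8",                                # 8 GB root disk
        "memory": 512,                                          # RAM in MiB
        "cores": 1,                                             # max CPU cores
        "net0": "name=eth0,bridge=vmbr0,ip=dhcp",               # network config
        "onboot": 1,                                            # start at boot
    },
    cookies=cookies, headers=headers, verify=False)
print(resp.json())
```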
To install a CT we need to have the template downloaded to the local volume, under the CT Templates tab. There you can browse the available templates and download the ones you need.
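Listing and downloading those templates can also be scripted. A sketch under the same assumptions as before; the aplinfo endpoint is, as I understand it, the one the pveam tool uses under the hood, so treat the parameter names as assumptions:

```python
import requests

HOST = "https://192.168.1.100:8006"   # same hypothetical host as before
NODE = "pve"                          # assumed node name

# `cookies` and `headers` come from the login sketch earlier in this text.
# List the templates (appliances) the node knows how to download.
index = requests.get(f"{HOST}/api2/json/nodes/{NODE}/aplinfo",
                     cookies=cookies, verify=False).json()["data"]
for tpl in index[:5]:
    print(tpl["template"])

# Ask the node to download one of them into the `local` storage.
resp = requests.post(f"{HOST}/api2/json/nodes/{NODE}/aplinfo",
                     data={"storage": "local", "template": index[0]["template"]},
                     cookies=cookies, headers=headers, verify=False)
print(resp.json())
```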
Don’t forget to enable “Start at boot” in the Options tab on the left (the same menu that holds the Console tab).