
Nagios Core and NagiosQL Installation on CentOS 7

This post is to document my Nagios Core 4.4.6 and NagiosQL 3.4.1 installation on CentOS 7. They are the latest available versions at the time of this writing (May 2020).

The main reason for this post is that I could not find up-to-date instructions for installing the latest version of NagiosQL on CentOS 7. When I followed the instructions in outdated blog posts, I ran into some issues. For example, the official CentOS 7 repository only has PHP 5.4, but NagiosQL 3.4.1 requires PHP 5.5.0 or later; and MySQL is no longer in the CentOS repository. Another reason is that I want to install the latest package versions (e.g. PHP 7.4) instead of older ones.

Nagios Core and Nagios Plugins Installation on CentOS 7

I follow the instructions on the Nagios Support Knowledgebase without any major issues. The only modifications are to get the latest version of Nagios Core, 4.4.6 instead of 4.4.5, and the latest version of the Nagios Plugins, 2.3.3 instead of 2.2.1.

NagiosQL Installation on CentOS 7

1. Install PHP 7.4 from Remi and EPEL repositories

As I mentioned earlier, the official CentOS 7 repository only has PHP 5.4, which doesn't meet the NagiosQL 3.4.1 requirement. You can check the installed PHP version with php -v.

yum install epel-release
yum install http://rpms.famillecollet.com/enterprise/remi-release-7.rpm
yum install yum-utils
### install PHP 7.4
yum-config-manager --enable remi-php74
yum install php php-mcrypt php-cli php-gd php-curl php-mysql php-ldap php-zip php-fileinfo
### verify PHP 7.4 is installed
php -v
2. Install MySQL from the community repository

I follow the instructions in this post without any issues.

wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm
rpm -ivh mysql-community-release-el7-5.noarch.rpm
yum update
yum install mysql-server
systemctl start mysqld
### change MySQL root password, remove anonymous user accounts, disable root logins outside of localhost, and remove test databases
mysql_secure_installation
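
Optionally, confirm MySQL is working and enable it at boot. This is a small extra check I add here; the service name mysqld matches the community packages installed above.

### optional: verify MySQL and enable it at boot
mysql -u root -p -e "SELECT VERSION();"
systemctl enable mysqld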
3. Install NagiosQL required packages
yum install libssh2 libssh2-devel mysql php-mysql php-pear php-devel
### install ssh2.so version 1.2 beta that supports PHP 7.4
pecl install ssh2-1.2
### add extension=ssh2.so to /etc/php.ini under Dynamic Extensions
vi /etc/php.ini
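
To confirm PHP actually picks up the new extension, here is a quick check (it assumes the CLI and Apache share the same /etc/php.ini):

### verify the ssh2 extension is loaded
php -m | grep ssh2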
4. Download and extract NagiosQL 3.4.1 archive in Nagios document root (/usr/local/nagios/share)
cd /usr/local/nagios/share
curl -L -O https://downloads.sourceforge.net/project/nagiosql/nagiosql/NagiosQL%203.4.1/nagiosql-3.4.1-git2020-01-19.tar.gz
tar xzf nagiosql-3.4.1-git2020-01-19.tar.gz
mv nagiosql-3.4.1 webadmin
chown -R nagios:nagios webadmin/

### create the NagiosQL configuration directory
mkdir /usr/local/nagios/nagiosql
chown -R apache:apache /usr/local/nagios/nagiosql/
5. Set up PHP Timezone and restart Apache web server

See here for the list of supported timezones.

### set date.timezone = 'America/Los_Angeles'
vi /etc/php.ini

systemctl restart httpd
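
To confirm the timezone setting took effect, a quick check against the same /etc/php.ini:

### confirm the effective default timezone
php -i | grep date.timezone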
6. Start NagiosQL web installer
  • Open the URL in a browser: http://nagiosserver/nagios/webadmin/install/index.php
  • Click Start Installation
  • Verify the system meets all the requirements. This is where I found out that PHP 5.4 in CentOS 7 doesn't meet the requirement
  • Click Next
  • On NagiosQL Installation: Setup page
    • Enter NagiosQL DB password, root password (the root password is MySQL root password), and NagiosQL admin password
    • Check the checkboxes
      • “Drop database if already exists?”
      • “Import Nagios sample config?” (optional)
      • “Create NagiosQL config paths?”
    • set NagiosQL config path: /usr/local/nagios/nagiosql
    • set Nagios config path: /usr/local/nagios/etc
  • Click Next
  • On NagiosQL Installation: Finishing Setup page, it should be all green if everything is right
  • Delete the NagiosQL install directory
7. Access NagiosQL web UI
  • Open the URL in a browser: http://nagiosserver/nagios/webadmin
  • Log in with the NagiosQL admin account and password
8. Integrate NagiosQL with Nagios
  • Navigate to Administration -> Administration -> Config targets
  • Click Modify icon next to localhost
  • On Configuration domain administration page
    • Configuration directories section should be all set. No change is needed
    • Nagios configuration files and directories section, verify the following settings
      • Nagios base directory: /usr/local/nagios/etc/
      • Import directory: /usr/local/nagios/etc/objects/
      • Picture base directory: (blank)
      • Nagios command file: /usr/local/nagios/var/rw/nagios.cmd
      • Nagios binary file: /usr/local/nagios/bin/nagios
      • Nagios process file: /run/nagios.lock
      • Nagios config file: /usr/local/nagios/etc/nagios.cfg
      • Nagios cgi file: /usr/local/nagios/etc/cgi.cfg
      • Nagios resource file: /usr/local/nagios/etc/resource.cfg
    • Select 4.x in Nagios version
    • Leave Access group “Unrestricted access”
    • Check Active checkbox
    • Click Save
  • Edit Nagios Core configuration file
    • Edit Nagios configuration file /usr/local/nagios/etc/nagios.cfg
    • Comment all cfg_file and cfg_dir entries
    • Add the following cfg_file and cfg_dir entries
    cfg_file=/usr/local/nagios/nagiosql/commands.cfg
    cfg_file=/usr/local/nagios/nagiosql/contactgroups.cfg
    cfg_file=/usr/local/nagios/nagiosql/contacts.cfg
    cfg_file=/usr/local/nagios/nagiosql/contacttemplates.cfg
    cfg_file=/usr/local/nagios/nagiosql/hostdependencies.cfg
    cfg_file=/usr/local/nagios/nagiosql/hostescalations.cfg
    cfg_file=/usr/local/nagios/nagiosql/hostextinfo.cfg
    cfg_file=/usr/local/nagios/nagiosql/hostgroups.cfg
    cfg_file=/usr/local/nagios/nagiosql/hosttemplates.cfg
    cfg_file=/usr/local/nagios/nagiosql/servicedependencies.cfg
    cfg_file=/usr/local/nagios/nagiosql/serviceescalations.cfg
    cfg_file=/usr/local/nagios/nagiosql/serviceextinfo.cfg
    cfg_file=/usr/local/nagios/nagiosql/servicegroups.cfg
    cfg_file=/usr/local/nagios/nagiosql/servicetemplates.cfg
    cfg_file=/usr/local/nagios/nagiosql/timeperiods.cfg
    
    cfg_dir=/usr/local/nagios/nagiosql/hosts
    cfg_dir=/usr/local/nagios/nagiosql/services
9. Verify Nagios Core config files

/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

When I run the above command, I get error messages about a missing commands.cfg file, etc. I go back to the NagiosQL web UI and, in each main section (Supervision, Alerting, Commands, and Specialties), click "Write config file" to generate these files. After that, the command reports no errors or warnings.

10. Restart Nagios Core service

systemctl restart nagios
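
Optionally, enable the services at boot so the whole stack comes back after a reboot (assuming the unit names used in this post: nagios, httpd, and mysqld):

### optional: enable services at boot
systemctl enable nagios httpd mysqld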

Now Nagios Core and NagiosQL are successfully set up. You can view the monitoring status in the Nagios web UI and modify the monitoring configuration via the NagiosQL web UI.

Using growpart to extend a Linux non-LVM partition

This post is about using growpart to extend a non-LVM Linux partition. For extending an LVM volume, see my other posts - Extend a Linux LVM Volume on a VM part 1, part 2, and part 3.

Increasing the size of a Linux partition normally requires the following procedure:

  1. Increase the size of the physical or virtual hard drive
  2. Extend the partition into the added drive space
  3. Resize the file system to fill the extended partition

Among these steps, step #2 usually sounds risky. It requires deleting the existing partition and recreating it with the new size. Most people are not comfortable doing that, so they end up adding a new, bigger drive, creating a new, bigger partition, formatting it, and copying the files from the old partition to the new one. This is not only time-consuming, but also requires more physical or virtual disk space.

Recently I learned about growpart, which makes step #2 much simpler and error-free. growpart may not be installed by default, but it should be available in your distro's repositories. The following are the commands to increase a Linux partition (/dev/sdb1) on CentOS; an XFS variant is shown after the list.

  • sudo yum install cloud-utils-growpart
  • increase the disk size (e.g. /dev/sdb) and reboot
  • sudo growpart /dev/sdb 1 ### there is a space between sdb and 1. 1 is the partition number
  • sudo resize2fs /dev/sdb1
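
Note that resize2fs only handles ext2/3/4 file systems. If the partition holds an XFS file system (the CentOS 7 default), grow the file system through its mount point instead. The sketch below assumes a hypothetical mount point /data:

sudo growpart /dev/sdb 1
### for XFS, grow the file system via the mount point instead of resize2fs
sudo xfs_growfs /data
df -h /data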

See growpart man page for more info.

Extend a Linux LVM Volume on a VM - Part 3

This is part 3 of extending a Linux LVM volume. See part 1 and part 2.

Part 3 is similar to part 2, where a partition is used as the PV. Instead of creating a new partition in the free disk space (as in part 2), you delete the last partition on the disk and recreate it with the larger size. This is useful when all the primary partitions (1 - 4) are already in use. A condensed command example follows the steps below.

Extend an LVM volume when a partition is used as the PV

  1. Increase the VM’s hard disk size in vSphere Client

    • If there is a VM snapshot on the disk, its size cannot be changed. Remove all the snapshots first
    • After increasing the disk size, take a snapshot as the backup
  2. Rescan the SCSI bus to verify the OS sees the new space on the disk
    • ls /sys/class/scsi_host/
    • echo "- - -" > /sys/class/scsi_host/<host_name>/scan
    • tail -f /var/log/messages
    • or
    • ls /sys/class/scsi_disk/
    • echo '1' > /sys/class/scsi_disk/<0:0:0:0>/device/rescan
    • tail -f /var/log/messages
    • fdisk -l
  3. Prepare the disk partition
    • fdisk -l
    • fdisk </dev/sdb>
    • p - print the partition table, note the last partition number in use
    • d - delete a partition
    • <X> - partition number, enter the last partition number from the previous print command
    • n - add a new partition
    • p - primary partition
    • <X> - partition number, enter the partition number that was deleted in the previous step
    • default - the beginning cylinder of the original partition
    • default - the last free cylinder
    • t - change a partition’s system id
    • <X> - partition number, enter the partition number that was recreated in the previous step
    • 8e - Linux LVM
    • w - write table to disk and exit
    • fdisk -l to verify the new partition size
  4. Update partition table changes to kernel
    • reboot
    • or partprobe </dev/sdb>
    • Update (04/18/2016): In RHEL 6, partprobe will only update the partitions on a disk if none of its partitions are in use (e.g. mounted). If any partition on the disk is in use, partprobe will not update the partition table in the kernel because it is considered unsafe in some situations, so a reboot is required. See "How to use a new partition in RHEL6 without reboot?"
  5. Resize the PV
    • pvresize </dev/sdb3>
  6. Verify the VG automatically sees the new space
    • vgs
  7. Extend the LV
    • lvextend -l +100%FREE /dev/<volume_group_name>/<logical_volume_name>
    • or lvextend -L+<size> /dev/<volume_group_name>/<logical_volume_name>
    • lvs
  8. Resize the file system
    • resize2fs /dev/<volume_group_name>/<logical_volume_name>
    • df -h
  9. Remove the VM snapshot after confirming the data is intact
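
As a condensed example, assume the recreated partition is /dev/sdb3, backing a hypothetical volume group vg_app with logical volume lv_app on an ext4 file system:

### example only: vg_app and lv_app are placeholder names
pvresize /dev/sdb3
vgs vg_app
lvextend -l +100%FREE /dev/vg_app/lv_app
resize2fs /dev/vg_app/lv_app
df -h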

Extend a Linux LVM Volume on a VM - Part 2

This is part 2 of extending a Linux LVM volume. See part 1 for the case where the entire disk is used as the PV. A condensed command example follows the steps below.

Extend an LVM volume when a partition is used as the PV

  1. Increase the VM’s hard disk size in vSphere Client

    • If there is a VM snapshot on the disk, its size cannot be changed. Remove all the snapshots first
    • After increasing the disk size, take a snapshot as the backup
  2. Rescan the SCSI bus to verify the OS sees the new space on the disk
    • ls /sys/class/scsi_host/
    • echo "- - -" > /sys/class/scsi_host/<host_name>/scan
    • tail -f /var/log/messages
    • or
    • ls /sys/class/scsi_disk/
    • echo '1' > /sys/class/scsi_disk/<0:0:0:0>/device/rescan
    • tail -f /var/log/messages
    • fdisk -l
  3. Prepare the disk partition
    • fdisk -l
    • fdisk </dev/sdb>
    • p - print the partition table, note the next available partition number
    • n - add a new partition
    • p - primary partition
    • <X> - partition number, enter the next available partition number from the previous print command
    • default - the beginning of the free cylinder
    • default - the last free cylinder
    • t - change a partition’s system id
    • <X> - partition number, enter the partition number that was just created
    • 8e - Linux LVM
    • w - write table to disk and exit
    • fdisk -l to verify the new partition
  4. Update partition table changes to kernel
    • reboot
    • or partprobe </dev/sdb>
    • Update (04/18/2016): In RHEL 6, partprobe will only update the partitions on a disk if none of its partitions are in use (e.g. mounted). If any partition on the disk is in use, partprobe will not update the partition table in the kernel because it is considered unsafe in some situations, so a reboot is required. See "How to use a new partition in RHEL6 without reboot?"
  5. Initialize the disk partition
    • pvcreate </dev/sdb3>
  6. Extend the VG
    • use vgdisplay to determine the volume group name
    • vgextend <volume_group_name> </dev/sdb3>
    • vgs
  7. Extend the LV
    • lvextend -l +100%FREE /dev/<volume_group_name>/<logical_volume_name>
    • or lvextend -L+<size> /dev/<volume_group_name>/<logical_volume_name>
    • lvs
  8. Resize the file system
    • resize2fs /dev/<volume_group_name>/<logical_volume_name>
    • df -h
  9. Remove the VM snapshot after confirming the data is intact
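
As a condensed example, assume the new partition is /dev/sdb3 and it is added to a hypothetical volume group vg_app with logical volume lv_app on ext4:

### example only: vg_app and lv_app are placeholder names
pvcreate /dev/sdb3
vgextend vg_app /dev/sdb3
lvextend -l +100%FREE /dev/vg_app/lv_app
resize2fs /dev/vg_app/lv_app
df -h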

Extend a Linux LVM Volume on a VM - Part 1

As I mentioned in the recent Linux LVM post, there are two ways to prepare the physical volume (PV):

  • use the entire disk as a PV (not recommended)
  • or create a partition on the disk and use the partition as a PV.

The steps to extend an LVM volume are different for these two configurations. A condensed command example follows the steps below.

Extend an LVM volume when the entire disk is used as the PV

  1. Increase the VM’s hard disk size in vSphere Client
    • If there is a VM snapshot on the disk, its size cannot be changed. Remove all the snapshots first
    • After increasing the disk size, take a snapshot as the backup
  2. Rescan the SCSI bus to verify the OS sees the new space on the disk
    • ls /sys/class/scsi_host/
    • echo "- - -" > /sys/class/scsi_host/<host_name>/scan
    • tail -f /var/log/messages
    • or
    • ls /sys/class/scsi_disk/
    • echo '1' > /sys/class/scsi_disk/<0:0:0:0>/device/rescan
    • tail -f /var/log/messages
    • fdisk -l
  3. Resize the PV
    • pvs
    • pvresize </dev/sdb>
    • (screenshot)
  4. Verify the VG automatically sees the new space
    • vgs
    • (screenshot)
    • compare the vg_app VFree size in this screen with the one in step #3
  5. Extend the LV
    • lvextend -l +100%FREE /dev/<volume_group_name>/<logical_volume_name>
    • or lvextend -L+<size> /dev/<volume_group_name>/<logical_volume_name>
    • lvs
    • (screenshot)
  6. Resize the file system
    • resize2fs /dev/<volume_group_name>/<logical_volume_name>
    • df -h
    • (screenshot)
  7. Remove the VM snapshot after confirming the data is intact
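
As a condensed example, assume the whole-disk PV is /dev/sdb in a hypothetical volume group vg_app with logical volume lv_app on ext4:

### example only: vg_app and lv_app are placeholder names
pvresize /dev/sdb
vgs vg_app
lvextend -l +100%FREE /dev/vg_app/lv_app
resize2fs /dev/vg_app/lv_app
df -h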

Linux Logical Volume Management (LVM) and Setup

LVM Layout

LVM Logical Volume Components

(source: RedHat Logical Volume Manager Administration)

LVM Components (from bottom to top)

  • Hard Disks
  • Partitions
    • LVM will work fine with the entire disk (without creating a partition) as a PV, but this is not recommended.
    • Other OSes or disk utilities (e.g. fdisk) will not recognize the LVM metadata and will display the disk as free, so the disk is likely to be overwritten by mistake.
    • The best practice is to create a partition on the hard disk, then initialize the partition as a PV.
    • It is generally recommended to create a single partition that covers the whole disk. (see RedHat Logical Volume Manager Administration)
    • Using an entire disk as a PV or using a partition as a PV requires a different procedure when growing the hard disk size in the VM (see "Expanding LVM Storage")
  • Physical Volumes
  • Volume Group
  • Logical Volumes
  • File Systems

LVM Setup

  1. Add a new hard disk
  2. Rescan the SCSI bus
    • ls /sys/class/scsi_host/
    • echo "- - -" > /sys/class/scsi_host/<host_name>/scan
    • tail -f /var/log/messages
    • or
    • ls /sys/class/scsi_disk/
    • echo '1' > /sys/class/scsi_disk/<0:0:0:0>/device/rescan
    • tail -f /var/log/messages
  3. Prepare the disk partition
    • fdisk -l
    • fdisk </dev/sdb>
    • n - add a new partition
    • p - primary partition
    • 1 - partition number
    • default - first cylinder
    • default - last cylinder
    • t - change a partition’s system id
    • 1 - partition number
    • 8e - Linux LVM
    • w - write table to disk and exit
    • fdisk -l to verify the new partition
  4. Update partition table changes to kernel
    • reboot
    • or partprobe </dev/sdb>
    • Update (04/18/2016): In RHEL 6, partprobe will only update the partitions on a disk if none of its partitions are in use (e.g. mounted). If any partition on the disk is in use, partprobe will not update the partition table in the kernel because it is considered unsafe in some situations, so a reboot is required. See "How to use a new partition in RHEL6 without reboot?"
  5. Initialize disks or disk partitions
    • pvcreate </dev/sdb> - skip step #3 and use the entire disk as a PV (not recommended)
    • pvcreate </dev/sdb1> - use the partition created in step #3 as a PV (best practice)
    • pvdisplay
    • pvs
  6. Create a volume group
    • vgcreate <volume_group_name> </dev/sdb1>
    • vgdisplay
    • vgs
  7. Create a logical volume
    • lvcreate --name <logical_volume_name> --size <size> <volume_group_name>
    • or lvcreate -n <logical_volume_name> -L <size> <volume_group_name>
    • lvdisplay
    • lvs
  8. Create the file system on the logical volume
    • mkfs.ext4 /dev/<volume_group_name>/<logical_volume_name>
  9. Mount the new volume
    • mkdir </mount_point>
    • mount /dev/<volume_group_name>/<logical_volume_name> </mount_point>
  10. Add the new mount point in /etc/fstab
    • vi /etc/fstab
    • /dev/<volume_group_name>/<logical_volume_name> </mount_point> ext4 defaults 0 0
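
Putting the steps together, here is a minimal end-to-end sketch with hypothetical names (volume group vg_app, logical volume lv_app, 20G size, mount point /app) on the partition /dev/sdb1:

### example only: vg_app, lv_app, 20G, and /app are placeholder values
pvcreate /dev/sdb1
vgcreate vg_app /dev/sdb1
lvcreate -n lv_app -L 20G vg_app
mkfs.ext4 /dev/vg_app/lv_app
mkdir /app
mount /dev/vg_app/lv_app /app
echo '/dev/vg_app/lv_app /app ext4 defaults 0 0' >> /etc/fstab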

Use WinSCP to Transfer Files in vCSA 6.7

This is a quick update on my previous post "Use WinSCP to Transfer Files in vCSA 6.5". When I try the same SFTP server setting in vCSA 6.7...