Zenoss install

Prepare host

  • Prepare the RAID member devices:

parted --script /dev/sdc "mklabel gpt"
parted --script /dev/sdc "mkpart primary 0% 100%"
parted --script /dev/sdc "set 1 raid on"

parted --script /dev/sdd "mklabel gpt"
parted --script /dev/sdd "mkpart primary 0% 100%"
parted --script /dev/sdd "set 1 raid on"

parted --script /dev/sde "mklabel gpt"
parted --script /dev/sde "mkpart primary 0% 100%"
parted --script /dev/sde "set 1 raid on"

parted --script /dev/sdf "mklabel gpt"
parted --script /dev/sdf "mkpart primary 0% 100%"
parted --script /dev/sdf "set 1 raid on"
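The four parted invocations above differ only in the device name; a minimal loop sketch that does the same work (assuming the member devices are exactly /dev/sdc through /dev/sdf) is:

# Partition each RAID member device identically (sketch; adjust the device list to your host)
for dev in /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
    parted --script "$dev" "mklabel gpt"
    parted --script "$dev" "mkpart primary 0% 100%"
    parted --script "$dev" "set 1 raid on"
done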

  • Create a RAID 0 array for data storage

mdadm --create /dev/md0 --level=raid0 --raid-devices=4 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

  • Prepare /dev/md0 as an LVM physical volume

pvcreate /dev/md0

  • Create volume group for zenoss

vgcreate data /dev/md0

  • Create logical volumes

lvcreate -L 50G -n zenoss_docker data
lvcreate -L 50G -n zenoss_cc_internal data
lvcreate -L 200G -n zenoss_application_data data
lvcreate -L 150G -n zenoss_data_backups data
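To confirm the volume group and logical volumes come out with the expected sizes, a quick check is:

vgs data
lvs --units g data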

  1. Install and configure an iSCSI target on the LVM volumes

yum -y install targetcli

  • Set iSCSI initiator name

echo "InitiatorName=iqn.2017-05.co.angani-iscsi:9add7c1d8360" > /etc/iscsi/initiatorname.iscsi
systemctl restart iscsid
/sbin/iscsi-iname

  • Create and export storage object backed by logical volume

targetcli /backstores/block create dev=/dev/mapper/data-zenoss_docker name=zenoss_docker
targetcli /backstores/block create dev=/dev/mapper/data-zenoss_application_data name=zenoss_application_data
targetcli /backstores/block create dev=/dev/mapper/data-zenoss_cc_internal name=zenoss_cc_internal
targetcli /backstores/block create dev=/dev/mapper/data-zenoss_data_backups name=zenoss_data_backups
targetcli /backstores/block ls

  • Create an IQN for the iSCSI target

targetcli /iscsi create iqn.2017-05.co.angani-iscsi:target00
targetcli /iscsi ls

  • Configure ACLs for the TPG
  • This ACL allows the zenoss-01 server to access the target’s IQN

targetcli /iscsi/iqn.2017-05.co.angani-iscsi:target00/tpg1/acls create iqn.2017-05.co.angani-iscsi:zenoss-01

  • Configure CHAP authentication by creating initiator users to allow access to backend storage

targetcli /iscsi/iqn.2017-05.co.angani-iscsi:target00/tpg1/acls/iqn.2017-05.co.angani-iscsi:zenoss-01 set auth userid=zenoss password=Oopequaiquieng5

  • Create the LUNs needed to associate a block device with a specific TPG.

targetcli /iscsi/iqn.2017-05.co.angani-iscsi:target00/tpg1/luns create /backstores/block/zenoss_docker
targetcli /iscsi/iqn.2017-05.co.angani-iscsi:target00/tpg1/luns create /backstores/block/zenoss_application_data
targetcli /iscsi/iqn.2017-05.co.angani-iscsi:target00/tpg1/luns create /backstores/block/zenoss_cc_internal
targetcli /iscsi/iqn.2017-05.co.angani-iscsi:target00/tpg1/luns create /backstores/block/zenoss_data_backups

  • Configure the target to offer services on a specific IP address (optional)
  • The default is 0.0.0.0:3260

targetcli /iscsi/iqn.2017-05.co.angani-iscsi:target00/tpg1/portals/ create 192.168.70.80

  • Save configs to /etc/target/saveconfig.json

targetcli saveconfig
ss -na | grep 3260

  • Open firewall port for iscsi

firewall-cmd --add-port=3260/tcp --permanent
firewall-cmd --reload

For iptables:

iptables -I INPUT -p tcp -m tcp --dport 3260 -j ACCEPT
iptables-save > /etc/sysconfig/iptables

service iptables save

iptables -A INPUT -i eth1 -s 10.1.212.51 -p tcp -m tcp --sport 3260 -j ACCEPT

iptables -A INPUT -p tcp -m tcp --sport 3260 -j ACCEPT

  • Start and enable target service

systemctl enable target
systemctl start target

Setup iSCSI Initiator

yum install -y iscsi-initiator-utils

  • Set InitiatorName

echo "InitiatorName=iqn.2017-05.co.angani-iscsi:zenoss-01" > /etc/iscsi/initiatorname.iscsi
systemctl restart iscsid

  • Configure initiator authentication

sed -i 's/#node.session.auth.username = username/node.session.auth.username = zenoss/g' /etc/iscsi/iscsid.conf
sed -i 's/#node.session.auth.password = password/node.session.auth.password = Oopequaiquieng5/g' /etc/iscsi/iscsid.conf
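Depending on the shipped iscsid.conf defaults, CHAP may also need to be enabled explicitly as the session authentication method; a hedged sketch that uncomments the stock line (assuming it is still commented out) is:

# Enable CHAP for session authentication (only needed if the line is still commented)
sed -i 's/^#node.session.auth.authmethod = CHAP/node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf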

  • Discover targets

iscsiadm -m discovery -t sendtargets -p 192.168.70.80 --discover

192.168.70.80:3260,1 iqn.2017-05.co.angani-iscsi:target00

  • Login to target

iscsiadm -m node -T iqn.2017-05.co.angani-iscsi:target00 --login

#iscsiadm -m node --login

  • Find the iSCSI disk name.

grep “Attached SCSI” /var/log/messages

  • Once the connection is established, both session and node details can be checked as follows.

iscsiadm -m session -o show
iscsiadm --mode node -P 1

  • Mount the iSCSI devices. First, list the available iSCSI devices:

lsscsi

[0:0:0:0]  cd/dvd  QEMU     QEMU DVD-ROM     1.5.  /dev/sr0
[2:0:0:0]  disk    LIO-ORG  zenoss_docker    4.0   /dev/sda
[2:0:0:1]  disk    LIO-ORG  zenoss_applicat  4.0   /dev/sdb
[2:0:0:2]  disk    LIO-ORG  zenoss_cc_inter  4.0   /dev/sdc
[2:0:0:3]  disk    LIO-ORG  zenoss_data_bac  4.0   /dev/sdd

  • Do the same with lsblk command:

lsblk

NAME  MAJ:MIN  RM  SIZE  RO  TYPE  MOUNTPOINT
sda   8:0      0   50G   0   disk
sdb   8:16     0   200G  0   disk
sdc   8:32     0   50G   0   disk
sdd   8:48     0   150G  0   disk

Begin zenoss setup

  • Perform the procedures in this section to install Control Center and Zenoss Core on a master host.

Prepare storage

/dev/sda > zenoss_docker data > 50G
/dev/sdb > zenoss_application_data data > 200G
/dev/sdc > zenoss_cc_internal data > 50G > /opt/serviced/var/isvcs
/dev/sdd > zenoss_data_backups > 150G

  1. Create a filesystem for Control Center internal services

parted --script /dev/sdc "mklabel gpt"
parted --script /dev/sdc "mkpart primary 0% 100%"
mkfs.xfs /dev/sdc1
echo "/dev/sdc1 /opt/serviced/var/isvcs xfs defaults 0 0" >> /etc/fstab
mkdir -p /opt/serviced/var/isvcs
mount -a && mount | grep isvcs

  2. Create a filesystem for application data backups

parted --script /dev/sdd "mklabel gpt"
parted --script /dev/sdd "mkpart primary 0% 100%"
mkfs.xfs /dev/sdd1
echo "/dev/sdd1 /opt/serviced/var/backups xfs defaults 0 0" >> /etc/fstab
mkdir -p /opt/serviced/var/backups
mount -a && mount | grep backups

  • Create an override for the default rules policy of the dynamic device management daemon.

ln -s /dev/null /etc/udev/rules.d/80-net-name-slot.rules

Preparing the master host operating system

  • Persist log data storage

mkdir -p /var/log/journal && systemctl restart systemd-journald
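Creating /var/log/journal is sufficient because journald's default Storage=auto switches to persistent storage when that directory exists; if you prefer to make the setting explicit, a hedged sketch is:

# Optional: explicitly request persistent journal storage (Storage=auto already honors /var/log/journal)
sed -i 's/^#\?Storage=.*/Storage=persistent/' /etc/systemd/journald.conf
systemctl restart systemd-journald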

Stop and disable firewalld

systemctl stop firewalld && systemctl disable firewalld

Disable SELinux

sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config

  • Install, enable, and start the dnsmasq package:

yum install -y dnsmasq
systemctl enable dnsmasq && systemctl start dnsmasq

vim /etc/dnsmasq.conf

  • Uncomment and edit the following entries:

    domain-needed

    bogus-priv

    local=/localnet/ > remove "net" so the entry reads local=/local/, and then remove the number sign character (#) from the beginning of the line

    domain=example.com

  • Save the file, and then restart the dnsmasq service.

systemctl restart dnsmasq

  • Install and configure the NTP package.

yum install -y ntp && systemctl enable ntpd

  • Configure ntpd to start when the system starts.

echo "systemctl start ntpd" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local

  • Start ntpd.

systemctl start ntpd

  • Install Docker

cat > /etc/yum.repos.d/docker.repo <<-EOF
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF

  • Update the repository cache.

yum clean all && yum makecache fast

  • Install docker 1.12.1

yum install -y docker-engine-1.12.1
systemctl enable docker

Disable docker repository

sed -i 's/enabled=1/enabled=0/g' /etc/yum.repos.d/docker.repo
grep '^enabled' /etc/yum.repos.d/docker.repo

  • Download and install the Zenoss repository package.

yum install http://get.zenoss.io/yum/zenoss-repo-1-1.x86_64.rpm
yum clean all
reboot

Installing Control Center

yum clean all && yum makecache fast
yum --enablerepo=zenoss-stable install -y serviced-1.3.1

  • Enable automatic startup.

systemctl enable serviced

  • Make a backup copy of the Control Center configuration file.

cp /etc/default/serviced /etc/default/serviced-1.3.1-orig

  • Set the backup file permissions to read-only.

chmod 0440 /etc/default/serviced-1.3.1-orig

  • Add a drop-in file for the NFS service.

mkdir -p /etc/systemd/system/nfs-server.service.d

cat <<EOF > /etc/systemd/system/nfs-server.service.d/nfs-server.conf
[Unit]
Requires=
Requires= network.target proc-fs-nfsd.mount rpcbind.service
Requires= nfs-mountd.service
EOF

systemctl daemon-reload

Configuring Docker Engine

  • Create a symbolic link for the Docker temporary directory
  • Docker uses its temporary directory to spool images.
  • The default directory is /var/lib/docker/tmp.
  • The following command specifies the same directory that Control Center uses, /tmp.
  • You can specify any directory that has a minimum of 10GB of unused space.
  • Create the docker directory in /var/lib.

mkdir /var/lib/docker

  • Create the link to /tmp.

ln -s /tmp /var/lib/docker/tmp

  • Create a systemd drop-in file for Docker Engine.
  • Create the override directory.

mkdir -p /etc/systemd/system/docker.service.d

  • Create the unit drop-in file.

cat <<EOF > /etc/systemd/system/docker.service.d/docker.conf
[Service]
TimeoutSec=300
EnvironmentFile=-/etc/sysconfig/docker
ExecStart=
ExecStart=/usr/bin/dockerd \$OPTIONS
TasksMax=infinity
EOF

  • Reload the systemd manager configuration.

systemctl daemon-reload

Create an LVM thin pool for Docker data using the serviced-storage command. To use an entire block device or partition for the thin pool, replace Device-Path with the device path:

serviced-storage create-thin-pool docker /dev/sda

/dev/mapper/docker-docker--pool

volume group: docker

logical volume: docker-pool

To use 50GB of an LVM volume group for the thin pool, replace Volume-Group with the name of an LVM volume group:

serviced-storage create-thin-pool --size=50G docker Volume-Group

  • Configure and start the Docker service.
  • Create a variable for the name of the Docker thin pool.
  • Replace Thin-Pool-Device with the name of the thin pool device created in the previous step:

myPool="/dev/mapper/docker-docker--pool"

  • Create variables for adding arguments to the Docker configuration file. The --exec-opt argument is a workaround for a Docker issue on RHEL/CentOS 7.x systems.

myDriver="--storage-driver devicemapper"
myLog="--log-level=error"
myFix="--exec-opt native.cgroupdriver=cgroupfs"
myMount="--storage-opt dm.mountopt=discard"
myFlag="--storage-opt dm.thinpooldev=$myPool"

  • Add the arguments to the Docker configuration file.

echo 'OPTIONS="'$myLog $myDriver $myFix $myMount $myFlag'"' >> /etc/sysconfig/docker

  • Start or restart Docker.

systemctl restart docker
docker info | grep 'Pool Name'

Pool Name: docker-docker--pool

  • Configure name resolution in containers.
  • Each time it starts, docker selects an IPv4 subnet for its virtual Ethernet bridge.
  • The selection can change; this step ensures consistency.
  • Identify the IPv4 subnet and netmask docker has selected for its virtual Ethernet bridge.

ip addr show docker0 | grep inet

  • Open /etc/sysconfig/docker in a text editor.
  • Add the following flags to the end of the OPTIONS declaration.
  • Replace Bridge-Subnet with the IPv4 subnet docker selected for its virtual bridge:

--dns=Bridge-Subnet --bip=Bridge-Subnet/16

  • For example, if the bridge subnet is 172.17.0.1, add the following flags:

--dns=172.17.0.1 --bip=172.17.0.1/16
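Rather than editing /etc/sysconfig/docker by hand, the flags can be appended with sed; a minimal sketch (assuming OPTIONS is a single double-quoted line, and using the example subnet above) is:

# Append the DNS and bridge-IP flags inside the existing OPTIONS="..." declaration
sed -i 's|^\(OPTIONS=.*\)"$|\1 --dns=172.17.0.1 --bip=172.17.0.1/16"|' /etc/sysconfig/docker
grep '^OPTIONS' /etc/sysconfig/docker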

  • Restart the Docker service.

systemctl restart docker

Creating the application data thin pool

  • Use this procedure to create a thin pool for application data storage.

serviced-storage create-thin-pool serviced /dev/sdb

/dev/mapper/serviced-serviced--pool

To use 200GB of an LVM volume group for the thin pool, replace Volume-Group with the name of an LVM volume group:

serviced-storage create-thin-pool --size=200G serviced Volume-Group

  • Edit the storage variables in the Control Center configuration file

vim /etc/default/serviced

  • Uncomment and set the following variables:

SERVICED_FS_TYPE=devicemapper
SERVICED_DM_THINPOOLDEV=/dev/mapper/serviced-serviced--pool

Configuring and starting the master host

  • The scripts in the following list are installed when Control Center is installed, and are started either daily or weekly by anacron.
  1. /etc/cron.daily/serviced
  • This script invokes logrotate daily, to manage the /var/log/serviced.access.log file.
  • This script is required on the master host and on all delegate hosts.
  2. /etc/cron.weekly/serviced-fstrim
  • This script invokes fstrim weekly, to reclaim unused blocks in the application data thin pool.
  • The life span of a solid-state drive (SSD) degrades when fstrim is run too frequently.
  • If the block storage of the application data thin pool is an SSD, you can reduce the frequency at which this script is invoked, as long as the thin pool never runs out of free space (see the sketch after this list).
  • An identical copy of this script is located in /opt/serviced/bin.
  • This script is required on the master host only.
  3. /etc/cron.weekly/serviced-zenossdbpack
  • This script invokes a serviced command weekly, which in turn invokes the database maintenance script for a Zenoss application.
  • If the application is not installed or is offline, the command fails.
  • This script is required on the master host only.
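As noted in the list above, one hedged way to reduce how often fstrim runs on an SSD-backed thin pool is simply to move the cron script to a less frequent schedule; this is a sketch, not a Zenoss-documented requirement:

# Run serviced-fstrim monthly instead of weekly (the reference copy in /opt/serviced/bin is untouched)
mv /etc/cron.weekly/serviced-fstrim /etc/cron.monthly/serviced-fstrim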

User access control

  • Control Center provides a browser interface and a command-line interface.
  • To gain access to the Control Center browser interface, users must have login accounts on the Control Center master host.
  • In addition, users must be members of the Control Center administrative group, which by default is the system group, wheel.
  • To enhance security, you may change the administrative group from wheel to any non-system group.
  • To use the Control Center command-line interface (CLI) on a Control Center cluster host, a user must have a login account on the host, and the account must be a member of the serviced group.
  • Pluggable Authentication Modules (PAM) is supported and recommended for enabling access to both the browser interface and the command-line interface.

Adding users to the default administrative group

usermod -aG wheel jmutai

Configuring a regular group as the Control Center administrative group

  • Use this procedure to change the default administrative group of Control Center from wheel to a non-system group.

SERVICED_ADMIN_GROUP > Default: wheel
  • The name of the Linux group on the serviced master host whose members are authorized to use the serviced browser interface.
  • You may replace the default group with a group that does not have superuser privileges.

SERVICED_ALLOW_ROOT_LOGIN > Default: 1 (true)

  • Determines whether the root user account on the serviced master host may be used to gain access to the serviced browser interface.
  • Create a variable for the group to designate as the administrative group.

ipa group-add zenoss_ldap --desc="Default group for zenoss users"
ipa group-add-member zenoss_ldap --groups=infra --groups=support --groups=management --groups=commercial

  • On control center remove sssd cache and restart sssd

rm -rf /var/lib/sss/db/cache_nbo.angani.co.ldb
systemctl restart sssd

myGROUP=zenoss_ldap

  • Add one or more existing users to the group (only if using local user groups)

usermod -aG $myGROUP User

  • Specify the new administrative group in the serviced configuration file.

vim /etc/default/serviced

SERVICED_ADMIN_GROUP=zenoss_ldap

  • Optional: Prevent the root user from gaining access to the Control Center browser interface, if desired.

vim /etc/default/serviced

  • Locate SERVICED_ALLOW_ROOT_LOGIN and set it to 0 to disable root user Control Center login:

SERVICED_ALLOW_ROOT_LOGIN=0

Enabling use of the command-line interface

usermod -aG serviced User

Configuring the base size device for tenant data storage

  • Use this procedure to configure the base size of virtual storage devices for tenants in the application data thin pool.
  • The base size is used each time a tenant device is created.
  • In particular, the first time serviced starts, it creates the base size device and then creates a tenant device from the base size device.

  • Identify the size of the thin pool for application data

lvs --options=lv_name,lv_size | grep serviced-pool
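Since the guideline used below sets the base size to roughly 50% of the thin pool, a hedged helper that computes that figure from the lvs output (assuming the pool is named serviced/serviced-pool) is:

# Compute 50% of the serviced thin pool size in GiB (sketch)
POOL_GIB=$(lvs --noheadings --units g --options lv_size serviced/serviced-pool | tr -d ' g' | cut -d. -f1)
echo "Suggested SERVICED_DM_BASESIZE: $((POOL_GIB / 2))G"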

  • Edit storage variables in the Control Center configuration file

vim /etc/default/serviced

SERVICED_DM_BASESIZE=100G # 50% of the size of the thin pool for application data

  • Then verify the settings

grep -E '^\b*SERVICED' /etc/default/serviced

Setting the host role to master

SERVICED_MASTER > Default: 1 (true)

  • Assign the role of a serviced instance, either master or delegate
  • The master runs the application services scheduler and other internal services
  • Delegates run the application services assigned to the resource pool to which they belong
  • Only one serviced instance can be the master; all other instances must be delegates.
  • The default value assigns the master role.
  • To assign the delegate role, set the value to 0 (false).
  • This variable must be explicitly set on all Control Center cluster hosts
  • verify

grep -E '^\b*SERVICED' /etc/default/serviced
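If SERVICED_MASTER is still commented out in the output above, a minimal sketch that uncomments and sets it (assuming the stock commented default is present) is:

# Explicitly assign the master role in the Control Center configuration (sketch)
sed -i 's|^#[[:space:]]*SERVICED_MASTER=.*|SERVICED_MASTER=1|' /etc/default/serviced
grep '^SERVICED_MASTER' /etc/default/serviced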

Starting Control Center for the first time

systemctl start serviced

To monitor progress, enter the following command:

journalctl -flu serviced -o cat

Running containers

Base image:

zenoss/serviced-isvcs:v56

Containers

serviced-isvcs_logstash > 0.0.0.0:5042-5043->5042-5043/tcp, 127.0.0.1:9292->9292/tcp
serviced-isvcs_kibana > 127.0.0.1:5601->5601/tcp
serviced-isvcs_elasticsearch-serviced > 127.0.0.1:9200->9200/tcp
serviced-isvcs_zookeeper > 0.0.0.0:2181->2181/tcp, 0.0.0.0:2888->2888/tcp, 0.0.0.0:3888->3888/tcp, 127.0.0.1:12181->12181/tcp
serviced-isvcs_elasticsearch-logstash > 127.0.0.1:9100->9100/tcp
serviced-isvcs_opentsdb > 0.0.0.0:4242->4242/tcp, 127.0.0.1:8888->8888/tcp, 127.0.0.1:9090->9090/tcp, 127.0.0.1:58443->58443/tcp, 0.0.0.0:8443->8443/tcp, 127.0.0.1:58888->58888/tcp
serviced-isvcs_docker-registry > 0.0.0.0:5000->5000/tcp

  • On hosts with internet access, the serviced daemon invokes docker to pull its internal services images from Docker Hub.
  • The Control Center browser and command-line interfaces are unavailable until the images are installed and tagged, and the services are started.
  • The process takes about 5 minutes.
  • On hosts without internet access, the serviced daemon tags images in the local registry and starts its internal services.
  • The Control Center browser and command-line interfaces are unavailable for about 3 minutes.
  • When the message Host Master successfully started is displayed, Control Center is ready for the next procedure.

Until the master host is added to a pool, all serviced CLI commands that use the RPC server must be run as root.

Adding the master host to a resource pool

Adding the master host to a resource pool (single-host deployments)

  • Use this procedure to add the master host to a resource pool.
  • For single-host deployments, the master host is added to the default pool.
  • This is done on the master host
  • Add the master host to the default resource pool.
  • Replace Hostname-Or-IP with the hostname or IP address of the Control Center master host:

serviced host add --register Hostname-Or-IP:4979 default

serviced host add --register zenoss-01.eadc.nbo.angani.co:4979 default

  • If you use a hostname, all of the hosts in your Control Center cluster must be able to resolve the name, either through an entry in /etc/hosts or through a nameserver on the network.
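To confirm the registration, list the hosts known to Control Center (serviced host list is the standard listing subcommand; output format varies by version):

serviced host list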

Stopping Control Center (single-host deployment)

  • Show the status of running services.

serviced service status

  • Stop the top-level service

serviced service stop Service

  • Monitor the stop

serviced service status

  • Stop the Control Center service

systemctl stop serviced

  • Ensure that no containers remain in the local repository

docker ps -qa

  • Remove all remaining containers.

docker ps -qa | xargs --no-run-if-empty docker rm -fv
docker ps -qa

  • Disable the automatic startup of serviced.

systemctl disable serviced

  • Re-enable automatic startup when you are ready to start Control Center again:

systemctl enable serviced

Install Zenoss Core on the Master host

  • Deploying Zenoss Core
  • This procedure adds the Zenoss Core application to the list of applications that Control Center manages.

yum --enablerepo=zenoss-stable install -y zenoss-core-service
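The yum command only installs the Zenoss Core service template package; the template still has to be added to Control Center and deployed. A hedged sketch of those follow-on steps (the template path and deployment ID here are illustrative, not taken from the original document) is:

# Add the Zenoss Core template shipped by the RPM, then deploy it into the default pool (sketch)
TEMPLATE_ID=$(serviced template add /opt/serviced/templates/zenoss-core-*.json)
serviced template deploy "$TEMPLATE_ID" default Zenoss.core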
