
How to install CEPH STORAGE on Ubuntu


« Posted: 5 July 2016, 18:49:24 »
Have a look at this; it is very useful, economical, and works well.
The installation steps are not difficult at all, and I will post the important usage steps on a later occasion.

First, set up the NICs for Ceph storage

ceph-admin (node used to manage all Ceph nodes within the cluster):
nic1 -> 192.168.1.2
nic2 -> 192.168.10.2

ceph-mon (Monitor nodes)
mon1
nic1 -> 192.168.1.3

mon2
nic1 -> 192.168.1.4

ceph-mds (metadata server nodes)
mds1
nic1 -> 192.168.1.5

mds2
nic1 -> 192.168.1.6

ceph-osd (Object store daemon, the storage cluster)
osd1
nic1 -> 192.168.1.7
nic2 -> 192.168.10.7

osd2
nic1 -> 192.168.1.8
nic2 -> 192.168.10.8

osd3
nic1 -> 192.168.1.9
nic2 -> 192.168.10.9

osd4
nic1 -> 192.168.1.10
nic2 -> 192.168.10.10
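
For reference, a minimal /etc/network/interfaces sketch for one of the dual-homed nodes (osd1 here), assuming Ubuntu's classic ifupdown networking; the interface names eth0 and eth1 are assumptions and may differ on your hardware:

$ sudo vi /etc/network/interfaces

# public network (assumed to be eth0)
auto eth0
iface eth0 inet static
    address 192.168.1.7
    netmask 255.255.255.0

# cluster network (assumed to be eth1, carries OSD replication traffic only)
auto eth1
iface eth1 inet static
    address 192.168.10.7
    netmask 255.255.255.0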

Install ceph-deploy

The ceph-deploy tool must only be installed on the admin node. Access to the other nodes for configuration purposes will be handled by ceph-deploy over SSH (with keys).

Add the Ceph repository to your apt configuration. The commands below use the hammer release; replace debian-hammer with the name of the Ceph release you want to install (e.g., debian-emperor, debian-firefly, ...).
Install the trusted release key with

$ wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
$ echo deb http://download.ceph.com/debian-hammer/ trusty main | sudo tee /etc/apt/sources.list.d/ceph.list
Install ceph-deploy

$ sudo apt-get update
$ sudo apt-get install ceph-deploy
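
To confirm the tool installed correctly, you can print its version (the exact number depends on the repository you added):

$ ceph-deploy --version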
Setup the admin node

Each Ceph node will be set up with a user that has passwordless sudo permissions, and each node will store the public key of the admin node to allow passwordless SSH access. With this configuration, ceph-deploy will be able to install and configure every node of the cluster.

     1. [optional] Create a dedicated user for cluster administration (this is particularly useful if the admin node is part of the Ceph cluster)

$ sudo useradd -d /home/cluster-admin -m cluster-admin -s /bin/bash
then set a password and switch to the new user

$ sudo passwd cluster-admin
$ su cluster-admin
     2. Add a ceph user on each Ceph cluster node (even if a cluster node is also an admin node) and give it passwordless sudo permissions

$ sudo useradd -d /home/ceph -m ceph -s /bin/bash
$ sudo passwd ceph
<Enter password>
$ echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
$ sudo chmod 0440 /etc/sudoers.d/ceph
     3. Edit the /etc/hosts file to add mappings to the cluster nodes. Example:

$ vi /etc/hosts
127.0.0.1 localhost
192.168.1.2 ceph-admin
192.168.1.3 mon1
192.168.1.4 mon2
192.168.1.5 mds1
192.168.1.6 mds2
192.168.1.7 osd1
192.168.1.8 osd2
192.168.1.9 osd3
192.168.1.10 osd4
     4. Generate a public key for the admin user and install it on every Ceph node

$ ssh-keygen
$ ssh-copy-id ceph@mon1
$ ssh-copy-id ceph@mon2
$ ssh-copy-id ceph@mds1
$ ssh-copy-id ceph@mds2
$ ssh-copy-id ceph@osd1
$ ssh-copy-id ceph@osd2
$ ssh-copy-id ceph@osd3
$ ssh-copy-id ceph@osd4
     5. Setup an SSH access configuration by editing the .ssh/config file. Example:

$ vi .ssh/config

Host mon1
Hostname mon1
User ceph

Host mon2
Hostname mon2
User ceph

Host mds1
Hostname mds1
User ceph

Host mds2
Hostname mds2
User ceph

Host osd1
Hostname osd1
User ceph

Host osd2
Hostname osd2
User ceph

Host osd3
Hostname osd3
User ceph

Host osd4
Hostname osd4
User ceph
     6. Before proceeding, check that ping and host commands work for each node

Ping every node to test connectivity; a combined check is sketched below.
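
A combined check such as the following (a sketch, assuming the hostnames from /etc/hosts and the ~/.ssh/config above) verifies name resolution, SSH access and passwordless sudo for every node in one pass:

$ for h in mon1 mon2 mds1 mds2 osd1 osd2 osd3 osd4; do
    ping -c 1 -W 2 $h > /dev/null && echo "$h: ping OK" || echo "$h: ping FAILED"
    ssh $h 'sudo -n true' && echo "$h: ssh+sudo OK" || echo "$h: ssh/sudo FAILED"
  done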


Setup the cluster

Administration of the cluster is done entirely from the admin node.

     1. Move to a dedicated directory to collect the files that ceph-deploy will generate. This will be the working directory for any further use of ceph-deploy

$ su - ceph
$ mkdir ceph-cluster
$ cd ceph-cluster
     2. Deploy the monitor node(s); replace mon1 mon2 with the hostnames of your initial monitor nodes

$ ceph-deploy new mon1 mon2
     3. Add a public network entry in the ceph.conf file if you have separate public and cluster networks (check the network configuration reference)

public network = 192.168.1.0/24
cluster network = 192.168.10.0/24

     4. Choose reasonable values for the number of replicas and placement groups (steps 3-5 all end up in ceph.conf; a combined sketch follows this list).
osd pool default size = 2 # Write an object 2 times
osd pool default min size = 1 # Allow writing 1 copy in a degraded state
osd pool default pg num = 4096 # for between 10 and 50 OSDs
osd pool default pgp num = 4096

     5. Choose a reasonable crush leaf type
#0 for a 1-node cluster.
#1 for a multi node cluster in a single rack
#2 for a multi node, multi chassis cluster with multiple hosts in a chassis
#3 for a multi node cluster with hosts across racks, etc.
osd crush chooseleaf type = 1     

     6. Then install Ceph on all the nodes in the cluster, including the admin node:

$ ceph-deploy install --no-adjust-repos ceph-admin mon1 mon2 mds1 mds2 osd1 osd2 osd3 osd4
     7. This will confirm that Ceph is installed correctly on every node. Once all the nodes are installed, create the initial monitor(s) and gather the keys:

$ ceph-deploy mon create-initial
(note: if for any reason the command fails at some point, you will need to run it again, this time writing it as ceph-deploy --overwrite-conf mon create-initial)
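
Putting steps 3 to 5 together, the [global] section of the ceph.conf in the working directory should end up looking roughly like this (the fsid, mon initial members and mon host lines are written by ceph-deploy new, so only the lines below them are added by hand):

[global]
fsid = <generated by ceph-deploy new>
mon initial members = mon1, mon2
mon host = 192.168.1.3,192.168.1.4
public network = 192.168.1.0/24
cluster network = 192.168.10.0/24
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 4096
osd pool default pgp num = 4096
osd crush chooseleaf type = 1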


Prepare OSDs and OSD Daemons

When deploying OSDs, consider that a single node can run multiple OSD daemons and that the journal partition should be on a separate drive from the OSD data for better performance.
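
For example, if a node also has a faster device to hold the journal, ceph-deploy accepts a host:data-disk:journal-disk triple; a sketch, where the device names sdb and sdf are assumptions:

$ ceph-deploy osd prepare osd1:sdb:sdf
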
     1. List disks on a node (replace osd1 with the name of your storage node(s))

$ ceph-deploy disk list osd1
     2. If you haven’t already prepared your storage, or if you want to reformat a partition, use the zap command (WARNING: this will erase the partition)

$ ceph-deploy disk zap --fs-type xfs osd1:/dev/sd<x>1
$ ceph-deploy disk zap --fs-type xfs osd2:/dev/sd<x>1
$ ceph-deploy disk zap --fs-type xfs osd3:/dev/sd<x>1
$ ceph-deploy disk zap --fs-type xfs osd4:/dev/sd<x>1
     3. Prepare and activate the disks (ceph-deploy also has a create command that combines these two operations, but for some reason it was not working for me). In this example we use the whole disks sdb to sde as OSDs on each of the storage nodes osd1, osd2, osd3 and osd4; the commands below show osd1, and the remaining nodes are covered by the loop sketched after this list.

$ ceph-deploy osd prepare osd1:sdb osd1:sdc osd1:sdd osd1:sde
$ ceph-deploy osd activate osd1:sdb1 osd1:sdc1 osd1:sdd1 osd1:sde1
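
The commands above only cover the first storage node; a sketch for repeating the same layout on the remaining nodes, assuming every node exposes the same sdb to sde disks:

$ for n in osd2 osd3 osd4; do
    ceph-deploy osd prepare $n:sdb $n:sdc $n:sdd $n:sde
    ceph-deploy osd activate $n:sdb1 $n:sdc1 $n:sdd1 $n:sde1
  done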
Setup the mds (metadata server daemon)

     Add the metadata servers:

$ ceph-deploy mds create mds1 mds2
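
Once the cluster configuration has been distributed (see "Final steps" below), a quick way to confirm the metadata servers registered with the cluster:

$ ceph mds stat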

Final steps

Now we need to copy the cluster configuration to all nodes and check the operational status of our Ceph deployment.
     1. Copy the keys and configuration files to every node (replace the node list with the names of your Ceph nodes)

$ ceph-deploy admin ceph-admin mon1 mon2 mds1 mds2 osd1 osd2 osd3 osd4
     2. Ensure proper permissions for admin keyring

$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
     3. Check the Ceph status and health

$ ceph health
$ ceph status
If, at this point, the reported health of your cluster is HEALTH_OK, then most of the work is done. Otherwise, try to check the troubleshooting part of this tutorial.
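
As an additional smoke test you can write and read back an object through RADOS; a sketch, where the pool name testpool and the placement group count of 128 are arbitrary choices:

$ ceph osd pool create testpool 128
$ echo "hello ceph" > testfile.txt
$ rados -p testpool put test-object testfile.txt
$ rados -p testpool ls
$ rados -p testpool get test-object testfile.out && cat testfile.out
$ ceph osd pool delete testpool testpool --yes-i-really-really-mean-it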

The installation is now complete.

Revert installation

There are useful commands to purge the Ceph installation and configuration from every node so that one can start over again from a clean state.

This will remove Ceph configuration and keys

$ ceph-deploy purgedata {ceph-node} [{ceph-node}]
$ ceph-deploy forgetkeys
This will also remove Ceph packages

$ ceph-deploy purge {ceph-node} [{ceph-node}]
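
For the node names used in this guide, a full teardown could be run from the working directory on the admin node like this:

$ ceph-deploy purge ceph-admin mon1 mon2 mds1 mds2 osd1 osd2 osd3 osd4
$ ceph-deploy purgedata ceph-admin mon1 mon2 mds1 mds2 osd1 osd2 osd3 osd4
$ ceph-deploy forgetkeys
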
Before getting a healthy Ceph cluster I had to purge and reinstall several times, cycling through the "Setup the cluster", "Prepare OSDs and OSD Daemons" and "Final steps" sections while resolving every warning that ceph-deploy reported.

Credit link(s):
ceph.com - http://ceph.com/
alanxelsys.com - https://alanxelsys.com/ceph-howto