
CLICKHOUSE CLUSTER INSTALLATION WITH CLICKHOUSE KEEPER

Here we will set up a ClickHouse cluster running on 3 nodes, with clickhouse-keeper running alongside the server on two of them and one clickhouse-keeper running standalone. When online, the installation can be done with yum install.

# yum install yum-utils 
# curl -o /etc/yum.repos.d/clickhouse.repo https://packages.clickhouse.com/rpm/clickhouse.repo
# dnf clean all
# dnf makecache

# dnf install -y clickhouse-server-24.3.13.40 clickhouse-client-24.3.13.40

ClickHouse server ships with clickhouse-keeper bundled, so if the server is installed, clickhouse-keeper cannot be installed separately on the same host (a package conflict error is raised). On hosts that will act only as clickhouse-keeper, however, clickhouse-keeper can be installed on its own, without the server:

# yum install clickhouse-keeper

For an offline installation, the RPM packages were downloaded manually from the repository

https://repo.yandex.ru/clickhouse/rpm/stable/

and copied to all nodes:

clickhouse-client-23.6.2.18.x86_64.rpm
clickhouse-common-static-23.6.2.18.x86_64.rpm
clickhouse-server-23.6.2.18.x86_64.rpm

clicknode01 –> ClickHouse Keeper, ClickHouse server, client
clicknode02 –> ClickHouse Keeper, ClickHouse server, client
clicknode03 –> ClickHouse Keeper

# mkdir -p /click/installs

Edit the /etc/hosts file on all nodes;

10.220.164.56 clicknode01 clicknode01.frkcvk.com
10.220.164.57 clicknode02 clicknode02.frkcvk.com
10.220.164.58 clicknode03 clicknode03.frkcvk.com
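
To confirm that all three names resolve identically on every node, a quick check can be run (a minimal sketch):

# for h in clicknode01 clicknode02 clicknode03; do getent hosts $h; done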

For nodes 1 and 2;

# yum localinstall clickhouse-server-23.6.2.18.x86_64.rpm clickhouse-client-23.6.2.18.x86_64.rpm clickhouse-common-static-23.6.2.18.x86_64.rpm

On node 3, either only clickhouse-keeper can be installed, or clickhouse-server can be installed and a separate service created for clickhouse-keeper.

Here we will continue by installing only clickhouse-keeper on node 3.

# yum localinstall clickhouse-keeper-23.6.2.18.x86_64.rpm

Create the following directories on all three nodes;

mkdir -p /etc/clickhouse-keeper/config.d
mkdir -p /var/log/clickhouse-keeper
mkdir -p /var/lib/clickhouse-keeper/coordination/log
mkdir -p /var/lib/clickhouse-keeper/coordination/snapshots
mkdir -p /var/lib/clickhouse-keeper/cores

Set the ownership and permissions of these directories;
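
A minimal sketch, assuming the clickhouse user and group created by the RPM packages:

# chown -R clickhouse:clickhouse /etc/clickhouse-keeper
# chown -R clickhouse:clickhouse /var/log/clickhouse-keeper
# chown -R clickhouse:clickhouse /var/lib/clickhouse-keeper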

There is no need to install ClickHouse Keeper on nodes 1 and 2; as noted above, installing both packages together is not possible anyway. If keeper is to run inside the server process, this is done through configuration. Those settings are defined below.

Add the following inside the <clickhouse> tags of the /etc/clickhouse-server/config.xml file:

 <listen_host>0.0.0.0</listen_host>
 <interserver_listen_host>0.0.0.0</interserver_listen_host>

The server is installed on nodes 1 and 2; on these nodes the following config settings are made for keeper;

Create the following config files under /etc/clickhouse-server/config.d, owned by the clickhouse user. Instead of creating these files, all of these settings could also have been added directly inside the <clickhouse>...</clickhouse> tags of config.xml.

# ls
clusters.xml  
enable-keeper.xml  
listen.xml  
macros.xml  
network-and-logging.xml  
remote-servers.xml  
use_keeper.xml

The contents of the files for node 1;

enable-keeper.xml

server_id --> 1. The raft_configuration section lists the nodes on which keeper will run. The same file on node 2 must have server_id --> 2.

<clickhouse>
    <keeper_server>
            <tcp_port>9181</tcp_port>
            <server_id>1</server_id>
            <log_storage_path>/var/lib/clickhouse-keeper/coordination/log</log_storage_path>
            <snapshot_storage_path>/var/lib/clickhouse-keeper/coordination/snapshots</snapshot_storage_path>

            <coordination_settings>
                <operation_timeout_ms>10000</operation_timeout_ms>
                <session_timeout_ms>30000</session_timeout_ms>
                <raft_logs_level>trace</raft_logs_level>
                <rotate_log_storage_interval>10000</rotate_log_storage_interval>
            </coordination_settings>

            <raft_configuration>
                <server>
                    <id>1</id>
                    <hostname>clicknode01</hostname>
                    <port>9234</port>
                </server>
                <server>
                    <id>2</id>
                    <hostname>clicknode02</hostname>
                    <port>9234</port>
                </server>
                <server>
                    <id>3</id>
                    <hostname>clicknode03</hostname>
                    <port>9234</port>
                </server>
            </raft_configuration>
    </keeper_server>
</clickhouse>
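
Since only server_id differs between nodes 1 and 2, node 2's copy can be derived from node 1's file; a convenience sketch, assuming ssh access between the nodes:

# scp /etc/clickhouse-server/config.d/enable-keeper.xml clicknode02:/etc/clickhouse-server/config.d/
# ssh clicknode02 "sed -i 's|<server_id>1</server_id>|<server_id>2</server_id>|' /etc/clickhouse-server/config.d/enable-keeper.xml"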

use_keeper.xml — this file must be identical on nodes 1 and 2. Node 3 runs only keeper, so it is not needed there:

<clickhouse>
    <zookeeper>
        <node index="1">
            <host>clicknode01</host>
            <port>9181</port>
        </node>
        <node index="2">
            <host>clicknode02</host>
            <port>9181</port>
        </node>
        <node index="3">
            <host>clicknode03</host>
            <port>9181</port>
        </node>
    </zookeeper>
</clickhouse>

macros.xml –> the file that sets up shards and replicas; on node 1 the replica tag points to node 1, and on node 2 it points to node 2;

<clickhouse>
        <macros>
                <cluster>mycluster</cluster>
                <shard>01</shard>
                <replica>clicknode01</replica>
                <layer>01</layer>
        </macros>
</clickhouse>
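
These macros are substituted into replicated table DDL, which lets the same statement run on every node. A minimal sketch (test_local is a made-up example table):

# clickhouse-client --query "
CREATE TABLE test_local ON CLUSTER mycluster
(
    id UInt64,
    ts DateTime
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/test_local', '{replica}')
ORDER BY id"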

clusters.xml

This is our ClickHouse cluster definition file. To add a node to the cluster, add it here; the file is the same on all nodes of the cluster.

<clickhouse>
    <remote_servers>
        <mycluster>
            <shard>
                <internal_replication>true</internal_replication>
                <replica><host>clicknode01</host><port>9000</port></replica>
                <replica><host>clicknode02</host><port>9000</port></replica>
            </shard>
        </mycluster>
    </remote_servers>
</clickhouse>

remote-servers.xml — note that replace="true" below makes this definition replace, rather than merge with, the mycluster entry from clusters.xml.

<clickhouse>
  <remote_servers replace="true">
    <mycluster>
      <secret>mysecretphrase</secret>
        <shard>
            <internal_replication>true</internal_replication>
            <replica>
                <host>clicknode01</host>
                <port>9000</port>
            </replica>
        </shard>
        <shard>
            <internal_replication>true</internal_replication>
            <replica>
                <host>clicknode02</host>
                <port>9000</port>
            </replica>
        </shard>
    </mycluster>
  </remote_servers>
</clickhouse>

listen.xml

<clickhouse>
    <listen_host>0.0.0.0</listen_host>
</clickhouse>

network-and-logging.xml

<clickhouse>
        <logger>
                <level>debug</level>
                <log>/var/log/clickhouse-server/clickhouse-server.log</log>
                <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
                <size>1000M</size>
                <count>3</count>
        </logger>
        <display_name>clickhouse</display_name>
        <listen_host>0.0.0.0</listen_host>
        <http_port>8123</http_port>
        <tcp_port>9000</tcp_port>
        <interserver_http_port>9009</interserver_http_port>
</clickhouse>
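
To verify that the config.d overrides are merged into the effective configuration, the bundled clickhouse-extract-from-config utility can print the final value of a key (a sketch; tcp_port is just an example key):

# clickhouse-extract-from-config --config-file=/etc/clickhouse-server/config.xml --key=tcp_port
9000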

On node 3, which runs only as keeper, the configuration files are as follows; they are located under /etc/clickhouse-keeper;

keeper_config.xml

<?xml version="1.0"?>
<clickhouse>
    <logger>
        <!-- Possible levels [1]:

          - none (turns off logging)
          - fatal
          - critical
          - error
          - warning
          - notice
          - information
          - debug
          - trace
          - test (not for production usage)

            [1]: https://github.com/pocoproject/poco/blob/poco-1.9.4-release/Foundation/include/Poco/Logger.h#L105-L114
        -->
        <level>trace</level>
        <log>/var/log/clickhouse-keeper/clickhouse-keeper.log</log>
        <errorlog>/var/log/clickhouse-keeper/clickhouse-keeper.err.log</errorlog>

        <size>1000M</size>
        <count>10</count>

        <levels>
          <logger>
            <name>ContextAccess (default)</name>
            <level>none</level>
          </logger>
          <logger>
            <name>DatabaseOrdinary (test)</name>
            <level>none</level>
          </logger>
        </levels>
    </logger>

    <path>/var/lib/clickhouse-keeper/</path>
    <core_path>/var/lib/clickhouse-keeper/cores</core_path>


    <keeper_server>
            <tcp_port>9181</tcp_port>
            <server_id>3</server_id>
            <log_storage_path>/var/lib/clickhouse-keeper/coordination/log</log_storage_path>
            <snapshot_storage_path>/var/lib/clickhouse-keeper/coordination/snapshots</snapshot_storage_path>

            <coordination_settings>
                <operation_timeout_ms>10000</operation_timeout_ms>
                <session_timeout_ms>30000</session_timeout_ms>
                <raft_logs_level>trace</raft_logs_level>
                <rotate_log_storage_interval>10000</rotate_log_storage_interval>
            </coordination_settings>

            <raft_configuration>
                <server>
                    <id>1</id>
                    <hostname>clicknode01</hostname>
                    <port>9234</port>
                </server>
                <server>
                    <id>2</id>
                    <hostname>clicknode02</hostname>
                    <port>9234</port>
                </server>
                <server>
                    <id>3</id>
                    <hostname>clicknode03</hostname>
                    <port>9234</port>
                </server>
            </raft_configuration>
    </keeper_server>
   <listen_host>0.0.0.0</listen_host>
   <interserver_listen_host>0.0.0.0</interserver_listen_host>
</clickhouse>

Under the /etc/clickhouse-keeper/config.d directory we have our enable-keeper.xml file; its contents are:

<clickhouse>
    <keeper_server>
            <tcp_port>9181</tcp_port>
            <server_id>3</server_id>
            <log_storage_path>/var/lib/clickhouse-keeper/coordination/log</log_storage_path>
            <snapshot_storage_path>/var/lib/clickhouse-keeper/coordination/snapshots</snapshot_storage_path>

            <coordination_settings>
                <operation_timeout_ms>10000</operation_timeout_ms>
                <session_timeout_ms>30000</session_timeout_ms>
                <raft_logs_level>trace</raft_logs_level>
                <rotate_log_storage_interval>10000</rotate_log_storage_interval>
            </coordination_settings>

            <raft_configuration>
                <server>
                    <id>1</id>
                    <hostname>clicknode01</hostname>
                    <port>9234</port>
                </server>
                <server>
                    <id>2</id>
                    <hostname>clicknode02</hostname>
                    <port>9234</port>
                </server>
                <server>
                    <id>3</id>
                    <hostname>clicknode03</hostname>
                    <port>9234</port>
                </server>
            </raft_configuration>
    </keeper_server>
</clickhouse>

Finally, let's start the whole stack and check it;

On nodes 1 and 2;

# systemctl enable clickhouse-server.service
# systemctl start clickhouse-server.service
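
Once the server is up, a quick liveness check can be done over the HTTP port, which answers with Ok.:

# curl http://clicknode01:8123
Ok.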

For node 3;

# systemctl enable clickhouse-keeper.service
# systemctl start clickhouse-keeper.service
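
Keeper answers the standard ZooKeeper four-letter commands, so a basic health probe looks like this:

# echo ruok | nc localhost 9181
imok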

Let's check our ClickHouse Keeper services: who is the leader, and who are the followers? See https://clickhouse.com/docs/en/guides/sre/keeper/clickhouse-keeper for details. For node 1 (a follower);

# echo mntr | nc localhost 9181
zk_version      v23.6.2.18-stable-89f39a7ccfe0c068c03555d44036042fc1c09d22
zk_avg_latency  1
zk_max_latency  48
zk_min_latency  0
zk_packets_received     4264
zk_packets_sent 4271
zk_num_alive_connections        0
zk_outstanding_requests 0
zk_server_state follower
zk_znode_count  52
zk_watch_count  0
zk_ephemerals_count     0
zk_approximate_data_size        15652
zk_key_arena_size       12288
zk_latest_snapshot_size 0
zk_open_file_descriptor_count   130
zk_max_file_descriptor_count    363758

For node 2; as seen below, it is the leader:

# echo mntr | nc localhost 9181
zk_version      v23.6.2.18-stable-89f39a7ccfe0c068c03555d44036042fc1c09d22
zk_avg_latency  1
zk_max_latency  27
zk_min_latency  0
zk_packets_received     228
zk_packets_sent 227
zk_num_alive_connections        2
zk_outstanding_requests 0
zk_server_state leader
zk_znode_count  52
zk_watch_count  2
zk_ephemerals_count     0
zk_approximate_data_size        15652
zk_key_arena_size       12288
zk_latest_snapshot_size 0
zk_open_file_descriptor_count   134
zk_max_file_descriptor_count    363762
zk_followers    2
zk_synced_followers     2

For node 3; a follower:

# echo mntr | nc localhost 9181
zk_version      v23.6.2.18-stable-89f39a7ccfe0c068c03555d44036042fc1c09d22
zk_avg_latency  0
zk_max_latency  0
zk_min_latency  0
zk_packets_received     0
zk_packets_sent 0
zk_num_alive_connections        0
zk_outstanding_requests 0
zk_server_state follower
zk_znode_count  52
zk_watch_count  0
zk_ephemerals_count     0
zk_approximate_data_size        15652
zk_key_arena_size       12288
zk_latest_snapshot_size 0
zk_open_file_descriptor_count   38
zk_max_file_descriptor_count    363758
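
From either server node it is also worth confirming that ClickHouse itself can reach the keeper ensemble, for example by listing the ZooKeeper root via the system.zookeeper table:

# clickhouse-client --query "SELECT name FROM system.zookeeper WHERE path = '/'"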

For node 3, instead of installing the clickhouse-keeper RPMs, the clickhouse-server RPMs could also be installed and the host used purely as a ClickHouse Keeper. For that, a service like the one below must be created, then enabled and started;

cat /lib/systemd/system/clickhouse-keeper.service

	[Unit]
	Description=ClickHouse Keeper (analytic DBMS for big data)
	Requires=network-online.target
	# NOTE: that After/Wants=time-sync.target is not enough, you need to ensure
	# that the time was adjusted already, if you use systemd-timesyncd you are
	# safe, but if you use ntp or some other daemon, you should configure it
	# additionally.
	After=time-sync.target network-online.target
	Wants=time-sync.target
	
	[Service]
	Type=simple
	User=clickhouse
	Group=clickhouse
	Restart=always
	RestartSec=30
	RuntimeDirectory=clickhouse-keeper
	ExecStart=/usr/bin/clickhouse-keeper --config=/etc/clickhouse-keeper/config.xml --pid-file=/run/clickhouse-keeper/clickhouse-keeper.pid
	# Minus means that this file is optional.
	EnvironmentFile=-/etc/default/clickhouse
	LimitCORE=infinity
	LimitNOFILE=500000
	CapabilityBoundingSet=CAP_NET_ADMIN CAP_IPC_LOCK CAP_SYS_NICE CAP_NET_BIND_SERVICE
	
	[Install]
	# ClickHouse should not start from the rescue shell (rescue.target).
	WantedBy=multi-user.target

Alternatively, keeper can also be started manually in the foreground against a given config file:

# clickhouse-keeper --config /etc/clickhouse-server/config.d/keeper.xml

Start ClickHouse Keeper:

# systemctl daemon-reload

# systemctl enable clickhouse-keeper

# systemctl start clickhouse-keeper
Check the cluster definition from clickhouse-client;

clickhouse :) SELECT
    host_name,
    host_address,
    replica_num
FROM system.clusters

Query id: aea6f589-8ef3-4b91-8d6c-89d66bb55445

┌─host_name───┬─host_address─┬─replica_num─┐
│ clicknode01 │ 10.100.64.56 │           1 │
│ clicknode02 │ 10.100.64.57 │           1 │
└─────────────┴──────────────┴─────────────┘

2 rows in set. Elapsed: 0.001 sec.
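
As a final smoke test, replication itself can be exercised end to end; a sketch reusing the hypothetical test_local table from the macros example above (with the macros shown, both servers share shard 01, so a write on one node should become visible on the other):

# clickhouse-client --host clicknode01 --query "INSERT INTO test_local VALUES (1, now())"
# clickhouse-client --host clicknode02 --query "SELECT count() FROM test_local"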

Comments (1)

  • deepak_yadav says:

    October 4, 2023 at 8:28 am

    Hello,

    I’m facing an issue and could use some assistance. I’m encountering a problem specifically related to the ‘enable-keeper.xml’ file. Whenever I place this XML file in my ‘clickhouse-server/config.d/’ directory and attempt to restart the ‘clickhouse-server’ service using ‘systemctl,’ it becomes stuck. Interestingly, I have followed the same setup as described earlier, but the service gets stuck when this ‘enable-keeper.xml’ file is present. Does anyone have any insights into where I might be encountering this issue?

    Thank you for your help.
