Ambari deployment

1. Purpose of this document

Apache Ambari is a web-based tool that supports the provisioning, management, and monitoring of Apache Hadoop clusters. Ambari already supports most Hadoop components, including HDFS, MapReduce, Hive, Pig, HBase, ZooKeeper, Sqoop, and HCatalog, which makes monitoring and managing the cluster much easier.

Outline

Part One: basic environment installation
1. Create the inforpush account
2. Configure the hostname
3. Configure SSH
4. Configure ulimit
5. Configure umask
6. Install the JDK
7. Configure ntpd
8. Install Scala

Part Two: configure Ambari
1. Configure passwordless SSH login
2. Configure the local repository
3. Install ambari-server
4. Install ambari-agent
5. Install and deploy the HDP cluster

1. Create a user account and add it to the corresponding group

2. Configure the hostname

1. On all servers, set the hostname according to the plan:
vi /etc/hostname
2. Modify /etc/hosts:
vi /etc/hosts
Add the hostnames of all servers, e.g.:
10.8.1.6 node1
10.8.1.7 node2
10.8.1.8 node3
10.8.1.9 node4

3. Configure SSH

1. On all servers, modify /etc/ssh/sshd_config (the sshd daemon's configuration, not the client-side ssh_config) and change the port to 6801:
echo "Port 6801" >> /etc/ssh/sshd_config
Restart sshd afterwards for the new port to take effect.
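The /etc/hosts additions above can be scripted so the same mappings land on every server; a minimal sketch using the example IPs and hostnames from this guide (redirect the output with >> /etc/hosts when running as root):

```shell
# Print the cluster's hostname-to-IP mappings.
# Append the output to /etc/hosts on every server.
hosts_entries() {
  cat <<'EOF'
10.8.1.6 node1
10.8.1.7 node2
10.8.1.8 node3
10.8.1.9 node4
EOF
}

hosts_entries
```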

4. Configure ulimit

1. Modify the ulimit limits on all servers:
echo "ulimit -n 65000" >> /etc/profile
echo "* soft nofile 65000" >> /etc/security/limits.conf
echo "* hard nofile 65000" >> /etc/security/limits.conf

5. Configure umask

1. Modify /etc/profile on all servers:
vi /etc/profile
umask 022
2. Make /etc/profile take effect:
source /etc/profile

6. Install the JDK

1. Install JDK 1.8 on all servers.
2. Copy jdk-8u77-linux-x64.gz to the machine to be installed, then unpack it:
tar xvf jdk-8u77-linux-x64.gz
mv jdk1.8.0_77 /usr/local/
3. Configure the JAVA_HOME and PATH environment variables:
vi /etc/profile
Add the following at the end of the file and save:
# set the JAVA_HOME environment variable
export JAVA_HOME=/usr/local/jdk1.8.0_77/
export PATH=.:$JAVA_HOME/bin:$PATH
4. Apply the configuration:
source /etc/profile
5. Verify that the configuration took effect:
java -version
This should print the detailed version information.

7. Configure ntpd

1. Install ntp on all servers:
yum install ntp
Note: here node1 is chosen as the NTP server, and the other servers synchronize with node1.
2. Configure /etc/ntp.conf on the NTP server:
vi /etc/ntp.conf
Modify the restrict parameter to allow the network segment of the servers that need to synchronize.

3. Configure /etc/ntp.conf on the other servers:
vi /etc/ntp.conf
Modify the server parameter to point at the NTP server's IP address.
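As a concrete illustration, the two /etc/ntp.conf edits might look like the fragments below (the 10.8.1.0/24 segment and node1's address 10.8.1.6 are the examples used in this guide; adjust them to your own network plan):

```
# On the NTP server (node1): allow clients from the cluster's network segment
restrict 10.8.1.0 mask 255.255.255.0 nomodify notrap

# On all other servers: synchronize against node1
server 10.8.1.6
```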

4. Restart the ntp service after the configuration is complete:
systemctl restart ntpd
5. Check time synchronization:
ntpq -p

Note: the remote column shows the node or server used for synchronization.

8. Install Scala

1. Install scala-2.11.7 on all servers that will run Spark.
2. Copy scala-2.11.7.tgz to the machine to be installed, then unpack it:
tar xvf scala-2.11.7.tgz
mv scala-2.11.7 /usr/local/
3. Configure the SCALA_HOME and PATH environment variables:
vi /etc/profile
Add the following at the end of the file and save:
# set the Scala environment variable
export SCALA_HOME=/usr/local/scala-2.11.7/
export PATH=.:$SCALA_HOME/bin:$PATH
4. Apply the configuration:
source /etc/profile
5. Verify that the configuration took effect:
scala -version
This should print the Scala version information.


4.3.3 Installing ambari

1. Install ambari-server:
yum install ambari-server
The terminal prints progress information during the installation and finally reports that it is complete.

2. Configure ambari-server:
ambari-server setup -j /usr/local/jdk1.8.0_77/
-j JAVA_HOME [optional]: the argument after -j specifies the JDK directory; if -j is omitted, an Oracle JDK is installed by default.
3. At the prompt "Customize user account for ambari-server daemon", enter n to run ambari as root; enter y to run it as a non-root user, in which case you must type the desired username.
4. The next prompt, "Enter advanced database configuration", selects the database; the default is n, which uses the PostgreSQL database. We chose n and kept the default database.
5. "Completed successfully" indicates the configuration is done.
6. To change the configuration later, re-run ambari-server setup and step through the prompts again.

7. Start and stop ambari-server, and check its status:
Start: ambari-server start
Status: ambari-server status
Stop: ambari-server stop

1. Configure ambari-agent:
vi /etc/ambari-agent/conf/ambari-agent.ini
Modify the hostname parameter, setting it to the IP of the ambari-server host.
2. Start ambari-agent:
Start: ambari-agent start
Status: ambari-agent status
Stop: ambari-agent stop
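After that edit, the relevant part of /etc/ambari-agent/conf/ambari-agent.ini would look like this (10.8.1.6 is node1's IP from the earlier example, assuming ambari-server runs there; only the hostname line changes, the other settings keep their defaults):

```
[server]
hostname=10.8.1.6
```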

4.3.4 Installing and Deploying an HDP Cluster

1. In a browser, open http://node1:8080 to reach the ambari login page. Username: admin, password: admin.

2. Select Launch Install Wizard.
3. Step 1: Get Started. Give the cluster a name.
4. Step 2: Select Stack. Choose HDP 2.3, untick every checkbox except redhat7, and replace the default values of the HDP and HDP-UTILS base URLs.
5. Step 3: Install Options. In Target Hosts, enter the hostnames of the servers on which HDP should be installed. Because ambari-agent was installed in advance, choose "Perform manual registration on hosts and do not use SSH".
6. Step 4: Confirm Hosts. If warnings are reported, open them to inspect and resolve the issues, or skip them (resolving them is recommended).

7. Step 5: Choose Services. Select the HDP services to install; here we need ZooKeeper, Storm, Ambari Metrics, and Kafka.

8. Step 6: Assign Masters. Following the principle of load balancing, distribute the services to be installed across the machines. Tip: the Master of each component can be installed on a single machine, and Kafka and ZooKeeper must be installed on every machine.
9. Step 7: Assign Slaves and Clients. Install components on the servers as needed; installing Supervisor and Client on all machines is recommended.
10. Step 8: Customize Services. Review the parameter configuration of each service. The Storm component needs supervisor.slots.ports modified to add the required ports.
11. Step 9: Review. Confirm all installation options; to change anything, go back and modify it.
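For example, to give each Supervisor four worker slots, supervisor.slots.ports could be set as follows (6700-6703 are Storm's conventional defaults; this is an illustrative value, add further ports if more workers per machine are needed):

```
supervisor.slots.ports: [6700, 6701, 6702, 6703]
```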
