<?xml version="1.0" encoding="utf-8"?>
<search>
<entry>
<title><![CDATA[Getting GPU Information with Python]]></title>
<url>%2F2018%2F08%2F28%2Fpython%E8%8E%B7%E5%8F%96GPU%E4%BF%A1%E6%81%AF%2F</url>
<content type="text"><![CDATA[## pynvml

### Installation

Python 3:

```bash
pip3 install nvidia-ml-py3
```

Python 2:

```bash
pip install nvidia-ml-py2
# nvidia-ml-py3 also works on Python 2:
pip install nvidia-ml-py3
```

### Example script

The example below is a Python 3 script that prints the stats of every GPU as JSON:

```python
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import json
from pynvml import *


def getGpuUtilization(handle):
    try:
        util = nvmlDeviceGetUtilizationRates(handle)
        gpu_util = int(util.gpu)
    except NVMLError:
        gpu_util = -1  # fall back to -1 when the utilization query fails
    return gpu_util


def getMB(BSize):
    return BSize / (1024 * 1024)


def main():
    nvmlInit()
    deviceCount = nvmlDeviceGetCount()
    data = []
    for i in range(deviceCount):
        handle = nvmlDeviceGetHandleByIndex(i)
        meminfo = nvmlDeviceGetMemoryInfo(handle)
        one = {"gpuUtil": getGpuUtilization(handle)}
        one["gpuId"] = i
        one["memTotal"] = getMB(meminfo.total)
        one["memUsed"] = getMB(meminfo.used)
        one["memFree"] = getMB(meminfo.free)
        one["temperature"] = nvmlDeviceGetTemperature(handle, NVML_TEMPERATURE_GPU)
        data.append(one)
    data = {"gpuCount": deviceCount, "unit": "MB", "detail": data}
    print(json.dumps(data))
    nvmlShutdown()


if __name__ == '__main__':
    main()
```
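As a follow-up, here is a minimal sketch (my own addition, not from the original post) built on the same pynvml calls: it picks the GPU with the most free memory, which is handy before launching a job. The function name freest_gpu is mine, not part of pynvml.

```python
import pynvml


def freest_gpu():
    """Return the index of the GPU with the most free memory."""
    pynvml.nvmlInit()
    try:
        best_id, best_free = 0, -1
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            free = pynvml.nvmlDeviceGetMemoryInfo(handle).free
            if free > best_free:
                best_id, best_free = i, free
        return best_id
    finally:
        pynvml.nvmlShutdown()


if __name__ == '__main__':
    print(freest_gpu())
```]]></content>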
<tags>
<tag>Deep Learning</tag>
</tags>
</entry>
<entry>
<title><![CDATA[DayDayUP_Big Data Course [2]_Setting Up a Spark 1.4.1 Cluster]]></title>
<url>%2F2018%2F08%2F28%2FDayDayUP_%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%AD%A6%E4%B9%A0%E8%AF%BE%E7%A8%8B%5B2%5D_spark1.4.1%E9%9B%86%E7%BE%A4%E7%8E%AF%E5%A2%83%E7%9A%84%E6%90%AD%E5%BB%BA%2F</url>
<content type="text"><![CDATA[## Environment

- OS: CentOS 6.5
- Software: hadoop-2.6.2, jdk 1.8, scala-2.11.7, spark-1.4.1-bin-hadoop2.6
- Cluster:
  - master: www 192.168.78.110
  - slave1: node1 192.168.78.111
  - slave2: node2 192.168.78.112

hosts file on all three machines:

```
192.168.78.110 www
192.168.78.111 node1
192.168.78.112 node2
```

Make sure the three machines can ping each other by hostname.

## Download hadoop, scala and spark and extract them under /opt/hadoop

```bash
[hadoop@www hadoop]$ wget http://d3kbcqa49mib13.cloudfront.net/spark-1.4.1-bin-hadoop2.6.tgz
[hadoop@www hadoop]$ wget http://downloads.typesafe.com/scala/2.11.7/scala-2.11.7.tgz?_ga=1.262254604.1613215006.1446896742
[hadoop@www hadoop]$ wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.6.2/hadoop-2.6.2.tar.gz
[hadoop@www hadoop]$ tar -xzvf spark-1.4.1-bin-hadoop2.6.tgz    # extract the archives
[hadoop@www hadoop]$ tar -xzvf scala-2.11.7.tgz
[hadoop@www hadoop]$ tar -xzvf hadoop-2.6.2.tar.gz
```

The result:

```bash
[hadoop@www hadoop]$ pwd
/opt/hadoop
[hadoop@www hadoop]$ ll
total 12
drwxr-xr-x. 11 hadoop hadoop 4096 Nov  8 08:30 hadoop-2.6.2
drwxr-xr-x.  6 hadoop hadoop 4096 Nov  8 18:40 scala-2.11.7
drwxr-xr-x. 11 hadoop hadoop 4096 Nov  8 18:40 spark-1.4.1-bin-hadoop2.6
```

For setting up the fully distributed hadoop cluster itself, see http://blog.csdn.net/erujo/article/details/49716841

## Configure environment variables in ~/.bashrc

```bash
[hadoop@www scala-2.11.7]$ vimx ~/.bashrc
# User specific aliases and functions
export JAVA_HOME=/usr/java/jdk1.8.0_65
export SCALA_HOME=/opt/hadoop/scala-2.11.7
export HADOOP_HOME=/opt/hadoop/hadoop-2.6.2
export SPARK_HOME=/opt/hadoop/spark-1.4.1-bin-hadoop2.6
PATH=$PATH:${SCALA_HOME}/bin:${SPARK_HOME}/bin:${HADOOP_HOME}/bin
[hadoop@www scala-2.11.7]$ source !$
source ~/.bashrc
```

Test scala:

```bash
[hadoop@www scala-2.11.7]$ scala
Welcome to Scala version 2.11.7 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_65).
Type in expressions to have them evaluated.
Type :help for more information.

scala>    # this prompt means success
```

Copy the file to the slave machines:

```bash
[hadoop@www scala-2.11.7]$ scp ~/.bashrc hadoop@node1:~/.bashrc
```

## Configure spark on the master

### 4.1 spark-env.sh

```bash
[hadoop@www hadoop]$ cd spark-1.4.1-bin-hadoop2.6/conf/
[hadoop@www conf]$ mv spark-env.sh.template spark-env.sh
[hadoop@www conf]$ vimx spark-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_65
export SCALA_HOME=/opt/hadoop/scala-2.11.7
export SPARK_MASTER_IP=192.168.78.110
export SPARK_WORKER_MEMORY=2g
export HADOOP_CONF_DIR=/opt/hadoop/hadoop-2.6.2/etc/hadoop
```

### 4.2 slaves

```bash
[hadoop@www conf]$ vimx slaves
node1
node2
```

Once configured, copy the spark directory to the slave nodes.

## Start the spark cluster and check it

```bash
[hadoop@www conf]$ /opt/hadoop/hadoop-2.6.2/sbin/start-all.sh
[hadoop@www conf]$ /opt/hadoop/spark-1.4.1-bin-hadoop2.6/sbin/start-all.sh
```

Check the processes on the master:

```bash
[hadoop@www spark-1.4.1-bin-hadoop2.6]$ jps
8725 Jps
8724 Master
6679 ResourceManager
6504 SecondaryNameNode
6264 NameNode
```

and on a slave:

```bash
[hadoop@node1 spark-1.4.1-bin-hadoop2.6]$ jps
8880 Worker
8993 Jps
6770 NodeManager
6349 DataNode
```

If all of these processes are present, the cluster started successfully.

## Start the spark-shell console

```bash
[hadoop@www spark-1.4.1-bin-hadoop2.6]$ spark-shell
```

Earlier we uploaded a test.log file to the /input directory on HDFS; now use spark to read it as a test:

```scala
scala> val file = sc.textFile("hdfs://master:9000/input/test.log")
scala> val count = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_+_)
scala> count.collect()
```

The last lines of output show:

```
15/11/08 19:49:28 INFO scheduler.DAGScheduler: Job 0 finished: collect at <console>:26, took 16.682841 s
res0: Array[(String, Int)] = Array((hadoop,1), (hello,2), (world,1))
```

The job is also visible at http://192.168.78.110:4040/stages.

## Stop spark

```bash
[hadoop@www spark-1.4.1-bin-hadoop2.6]$ /opt/hadoop/spark-1.4.1-bin-hadoop2.6/sbin/stop-all.sh
```
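For readers who prefer Python, here is a hedged PySpark sketch (my own addition, not in the original post) of the same word count. It assumes the cluster above is running, that PySpark is available, that the standalone master listens on its default port 7077, and that /input/test.log exists on HDFS under the master hostname www:

```python
from pyspark import SparkConf, SparkContext

# spark://192.168.78.110:7077 assumes the standalone master's default port
conf = SparkConf().setAppName("wordcount").setMaster("spark://192.168.78.110:7077")
sc = SparkContext(conf=conf)

lines = sc.textFile("hdfs://www:9000/input/test.log")
counts = (lines.flatMap(lambda line: line.split(" "))
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
print(counts.collect())  # expected: [('hadoop', 1), ('hello', 2), ('world', 1)]
sc.stop()
```]]></content>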
<categories>
<category>Big Data</category>
<category>spark</category>
</categories>
<tags>
<tag>Big Data</tag>
<tag>spark</tag>
</tags>
</entry>
<entry>
<title><![CDATA[DayDayUP_yolov3 - Training on Your Own Dataset]]></title>
<url>%2F2018%2F08%2F28%2FDayDayUP_yolov3-%E8%AE%AD%E7%BB%83%E8%87%AA%E5%B7%B1%E7%9A%84%E6%95%B0%E6%8D%AE%E9%9B%86%2F</url>
<content type="text"><![CDATA[Reference: https://blog.csdn.net/lilai619/article/details/79695109

## Dataset preparation

### Collecting images

Collect images from the web, rename them in bulk with a script, and then annotate them with labelImg. On Linux, file names containing Chinese or special characters can cause hard-to-diagnose errors when the TXT files are generated or during training, hence the rename step. The rename scripts:

dataset.py

```python
# directory holding the source images
image_dir = 'E:/test/'
# output directory
save_dir = 'E:/output/'
# prefix for the new file names
new_name = 'test_'
```

rename.py

```python
import os

import dataset

image_dir = dataset.image_dir
save_dir = dataset.save_dir
new_name = dataset.new_name

files = os.listdir(image_dir)
for i in range(len(files)):
    ext = files[i].split('.')[-1]  # keep the original extension
    os.rename(image_dir + files[i], save_dir + new_name + str(i) + '.' + ext)
```

### Building a VOC-format dataset

Download the LabelImg executable and annotate the images; see the web for the details of using LabelImg.

Notes:

1. The Windows build of labelImg runs directly after unzipping.
2. Edit predefined_classes.txt under its data folder and add your own classes.
3. Image paths must not contain Chinese characters. Once the images are annotated, upload the images and XML files to the server.
4. Rename the image folder to JPEGImages and the XML folder to Annotations.

After uploading, arrange the folders in the standard VOC layout under a VOC2007 directory, with "test" replaced by your own project name. With this layout the stock dataset-generation scripts of other frameworks such as caffe-ssd and Faster-RCNN work unchanged; with a custom layout you have to edit those scripts.

Add create_ImageSets.py inside the VOC2007 folder:

create_ImageSets.py

```python
#!/usr/bin/env python
# encoding: utf-8
import os
import random

# test set: test_percent of all samples
# trainval: the remaining samples; train_percent of trainval goes to train,
# the rest to val
IMAGE_SETS_PATH = 'ImageSets'
MAIN_PATH = 'ImageSets/Main'
XML_FILE_PATH = 'Annotations'
JPEGImages_PATH = 'JPEGImages'
test_percent = 0.66   # fraction of all samples used for test
train_percent = 1.0   # fraction of train+val used for train

# create the ImageSets/Main directories
if not os.path.exists(MAIN_PATH):
    os.makedirs(MAIN_PATH)

img_list = os.listdir(JPEGImages_PATH)
numOfImg = len(img_list)
test_number = int(numOfImg * test_percent)
trainval_number = numOfImg - test_number
train_number = int(trainval_number * train_percent)

all_id = range(numOfImg)
test_id = sorted(random.sample(all_id, test_number))
trainval_id = list(set(all_id).difference(set(test_id)))
train_id = sorted(random.sample(trainval_id, train_number))

trainFile = open(os.path.join(MAIN_PATH, 'train.txt'), 'w')
valFile = open(os.path.join(MAIN_PATH, 'val.txt'), 'w')
trainvalFile = open(os.path.join(MAIN_PATH, 'trainval.txt'), 'w')
testFile = open(os.path.join(MAIN_PATH, 'test.txt'), 'w')

for i in range(numOfImg):
    stem = img_list[i].split('.')[0]
    if i in test_id:
        testFile.write(stem + '\n')
    else:
        trainvalFile.write(stem + '\n')
        if i in train_id:
            trainFile.write(stem + '\n')
        else:
            valFile.write(stem + '\n')

trainFile.close()
testFile.close()
trainvalFile.close()
valFile.close()
```

Then run

```bash
$ python create_ImageSets.py
```

If it runs cleanly you will find an ImageSets folder under VOC2007, with Main/train.txt, val.txt, trainval.txt and test.txt inside. At this point the VOC dataset is ready; a quick way to check the split follows below.
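As a sanity check (my own addition, not in the referenced post), the split just written by create_ImageSets.py can be verified from inside the VOC2007 folder:

```python
import os

MAIN_PATH = 'ImageSets/Main'
for name in ('trainval', 'train', 'val', 'test'):
    with open(os.path.join(MAIN_PATH, name + '.txt')) as f:
        count = sum(1 for line in f if line.strip())
    # trainval + test should equal the number of images in JPEGImages
    print(name, count)
```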
## YOLOv3 training

### 1. Link the dataset

Create your own project directory under the darknet folder, named after the project, e.g. test.

yolov3 location:

```shell
/home/dl/yolo/yolo-v3/darknet
```

Project folder:

```shell
/home/dl/yolo/yolo-v3/darknet/test
```

Create a symlink to the dataset, so that datasets stay in one place and can be shared by several users and frameworks without getting messy:

```bash
cd /home/dl/yolo/yolo-v3/darknet
cd test
ln -s /home/dl/data/test/VOCdevkit2007/VOC2007 VOC2007
```

### 2. Generate the dataset txt files

```bash
cp ./tools/voc_label.py .
```

Edit voc_label.py so that classes lists your own labels; for a two-class run with "up" and "down":

```python
classes = ["up", "down"]
```

Then run it:

```bash
python voc_label.py
```

When it finishes, the directory contains several txt files; the important ones are train.txt and test.txt.

### 3. Copy and create the configuration files

```bash
cp ./cfg/yolov3-voc.cfg test_train.cfg
cp ./cfg/voc.data test_voc.data
touch test_voc.names
```

names file: put your class names into test_voc.names, one per line; for this example the two lines are up and down.

data file:

- classes: the number of classes, background not included; 2 in this example.
- train: the full path of the train.txt generated above.
- test: the full path of the test.txt generated above.
- backup: the output folder, absolute or relative, but make sure it exists.

cfg file:

- batch and subdivisions depend on your GPU memory; with 8 GB and 640*480 images, batch=32 and subdivisions=16 are typical training values. For testing use the same cfg but with batch=1 and subdivisions=1.
- width and height: set to your image size, here width=640 and height=480.
- classes and filters must be changed in three places, one per [yolo] block, each computed with that layer's num. classes is the number of classes, so classes=2 for the "up"/"down" example. filters = (num/3) * (classes + 1 + 4), so with num=9 and classes=2, filters = 21.

That completes the configuration changes.

### Training

First run: go back to the darknet build directory, i.e. the one containing the darknet executable, /home/dl/yolo/yolo-v3/darknet in this example:

```bash
cd /home/dl/yolo/yolo-v3/darknet
./darknet detector train test/test_voc.data test/test_train.cfg darknet53.conv.74
```

Arguments: the .data file is the test_voc.data configured above; the cfg is the modified test_train.cfg; darknet53.conv.74 is the official pretrained model, which is optional. Every path argument may be absolute or relative; relative paths are used here.

If the output is all None, something in the steps above went wrong or was skipped; recheck them.

Resume a previous run:

```bash
./darknet detector train test/test_voc.data test/test_train.cfg test/out/yolov3_test_train.backup
```

test/out/yolov3_test_train.backup is the checkpoint from the previous run, located in the backup folder named in the .data file. With this argument training resumes; without it, training starts from zero.

### Testing

Copy the training cfg:

```bash
cp test/test_train.cfg test/test_test.cfg
```

and change it to batch=1 and subdivisions=1.

Run from the darknet build directory. Live camera test:

```bash
cd /home/dl/yolo/yolo-v3/darknet
./darknet detector demo test/test_voc.data test/test_test.cfg test/out/test_10000.weights
```

Single-image test:

```bash
./darknet detector test test/test_voc.data test/test_test.cfg test/out/test_10000.weights data/test.jpg
```

test/out/test_10000.weights is a weights file produced by training, stored under the backup path named in the .data file; data/test.jpg is the image to test. As before, every path may be absolute or relative.
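Since the filters arithmetic is easy to get wrong, here is a small helper (my own sketch, not from the post) that evaluates the formula above:

```python
def yolo_filters(num, classes, yolo_layers=3):
    """filters for the [convolutional] layer before each [yolo] layer:
    (num / yolo_layers) anchor boxes per scale, each predicting
    4 box coordinates + 1 objectness score + `classes` class scores."""
    return (num // yolo_layers) * (classes + 1 + 4)


print(yolo_filters(num=9, classes=2))  # -> 21, as in the "up"/"down" example
```]]></content>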
<categories>
<category>Deep Learning</category>
<category>yolo</category>
<category>yolov3</category>
</categories>
<tags>
<tag>Deep Learning</tag>
<tag>yolo</tag>
<tag>yolov3</tag>
</tags>
</entry>
<entry>
<title><![CDATA[DayDayUP_Linux Ops_Oracle 11g Installation Guide]]></title>
<url>%2F2018%2F07%2F06%2FDayDayUP_Linux%E8%BF%90%E7%BB%B4%E5%AD%A6%E4%B9%A0_oracle11g%E5%AE%89%E8%A3%85%E6%95%99%E7%A8%8B%2F</url>
<content type="text"><![CDATA[## 1. Environment

- Test virtual machine; OS: Red Hat Enterprise Linux 6.5 x64
- Software: linux.x64_oracle_11gR2
- RAM: 2 GB; storage: 40 GB
- Hostname: vmdbs
- IP addresses: 192.168.1.189, 192.168.128.189

The package groups the author selected when installing the OS:

### 1.1 Base System (8 groups)

- Base
- Client management tools
- Compatibility libraries
- Hardware monitoring utilities
- Large Systems Performance
- Network file system client
- Performance Tools
- Perl Support

### 1.2 Servers (2 groups)

- Server Platform
- System administration tools

### 1.3 Desktops (7 groups)

- Desktop
- Desktop Platform
- Fonts
- General Purpose Desktop
- Graphical Administration Tools
- Input Methods
- X Window System

### 1.4 Development (2 groups)

- Additional Development
- Development Tools

### 1.5 Applications (1 group)

- Internet Browser

With these groups selected, the English edition installs 1317 packages (the Chinese edition 1321). The English edition is recommended.

## 2. Preparation

### 2.1 Set the hostname

```bash
# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 vmdbs
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 vmdbs
192.168.1.189 vmdbs
192.168.128.189 vmdbs

# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=vmdbs
NTPSERVERARGS=iburst
```

### 2.2 Stop the firewall

```bash
service iptables stop
```

### 2.3 Set SELinux to disabled

```bash
# vim /etc/selinux/config
SELINUX=disabled
SELINUXTYPE=targeted
```

### 2.4 Main configuration files

#### 2.4.1 /etc/sysctl.conf

Edit with `# vim /etc/sysctl.conf` and add the following parameters (Oracle's recommendations):

```
fs.suid_dumpable = 1
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
```

Two of these already exist in sysctl.conf and must be commented out:

```
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
```

Apply the settings:

```bash
# sysctl -p
```
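As an optional check (my own sketch, not part of the original guide), the values can be read back from /proc/sys to confirm they took effect; the expected values are the ones set above:

```python
# Compare a few of the kernel parameters set above against /proc/sys.
expected = {
    'fs/file-max': '6815744',
    'fs/aio-max-nr': '1048576',
    'kernel/shmmax': '536870912',
    'kernel/shmmni': '4096',
}
for key, want in expected.items():
    with open('/proc/sys/' + key) as f:
        got = f.read().strip()
    print(key, 'ok' if got == want else 'got %s, want %s' % (got, want))
```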
#### 2.4.2 /etc/security/limits.conf

Edit with `# vim /etc/security/limits.conf` and add (Oracle's recommendations):

```
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft stack 10240
```

#### 2.4.3 Install possibly missing packages

Install the following packages from the OS DVD:

```bash
# From rhel-server-6.5-x86_64-dvd.iso
# mount -t auto /dev/cdrom /mnt/cdrom
# cd /mnt/cdrom/Packages
rpm -Uvh binutils-2*x86_64*
rpm -Uvh glibc-2*x86_64* nss-softokn-freebl-3*x86_64*
rpm -Uvh glibc-2*i686* nss-softokn-freebl-3*i686*
rpm -Uvh compat-libstdc++-33*x86_64*
rpm -Uvh glibc-common-2*x86_64*
rpm -Uvh glibc-devel-2*x86_64*
rpm -Uvh glibc-devel-2*i686*
rpm -Uvh glibc-headers-2*x86_64*
rpm -Uvh elfutils-libelf-0*x86_64*
rpm -Uvh elfutils-libelf-devel-0*x86_64*
rpm -Uvh gcc-4*x86_64*
rpm -Uvh gcc-c++-4*x86_64*
rpm -Uvh ksh-*x86_64*
rpm -Uvh libaio-0*x86_64*
rpm -Uvh libaio-devel-0*x86_64*
rpm -Uvh libaio-0*i686*
rpm -Uvh libaio-devel-0*i686*
rpm -Uvh libgcc-4*x86_64*
rpm -Uvh libgcc-4*i686*
rpm -Uvh libstdc++-4*x86_64*
rpm -Uvh libstdc++-4*i686*
rpm -Uvh libstdc++-devel-4*x86_64*
rpm -Uvh make-3.81*x86_64*
rpm -Uvh numactl-devel-2*x86_64*
rpm -Uvh sysstat-9*x86_64*
rpm -Uvh compat-libstdc++-33*i686*
rpm -Uvh compat-libcap*
```

#### 2.4.5 Add the oracle user and groups

```bash
# groupadd -g 501 oinstall
# groupadd -g 502 dba
# groupadd -g 503 oper
# groupadd -g 504 asmadmin
# groupadd -g 506 asmdba
# groupadd -g 505 asmoper
# useradd -u 502 -g oinstall -G dba,asmdba,oper oracle
# passwd oracle
```

Set the oracle user's password to oracle.

#### 2.4.6 Edit /etc/security/limits.d/90-nproc.conf

Change `* soft nproc 1024` to `* - nproc 16384`:

```bash
[root@vmdbs ~]# vim /etc/security/limits.d/90-nproc.conf
#* soft nproc 1024
* - nproc 16384
root soft nproc unlimited
```

#### 2.4.7 Paths, permissions and environment variables

Paths:

```bash
[root@vmdbs ~]# mkdir /tmp/oracle
[root@vmdbs ~]# mkdir -p /opt/oracle/oracle/product/11.2.0/db_1
[root@vmdbs ~]# mkdir -p /opt/oracle/oracle/oradata
[root@vmdbs ~]# mkdir -p /opt/oracle/oraInventory
[root@vmdbs ~]# chown -R oracle:oinstall /opt/oracle
[root@vmdbs ~]# chmod -R 775 /opt/oracle
```

Environment variables:

```bash
[root@vmdbs ~]# vim /home/oracle/.bash_profile
export TMP=/tmp/oracle
export TMPDIR=$TMP
export ORACLE_HOSTNAME=vmdbs
export ORACLE_UNQNAME=DB
export ORACLE_BASE=/opt/oracle/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export ORACLE_SID=orcl
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$CLASSPATH
[root@vmdbs ~]# source /home/oracle/.bash_profile
```

### 2.4.8 Prepare the installation packages

Upload linux.x64_oracle_11gR2 to 192.168.1.189:/home/oracle (the target server) by whatever means works: ftp, nfs, or an SSH tool such as Xshell (the author's choice). It contains:

- linux.x64_11gR2_database_1of2.zip
- linux.x64_11gR2_database_2of2.zip

```bash
[oracle@vmdbs ~]$ unzip linux.x64_11gR2_database_1of2.zip
[oracle@vmdbs ~]$ unzip linux.x64_11gR2_database_2of2.zip
[oracle@vmdbs ~]$ cd database
```

## 3. GUI installation

```bash
[oracle@vmdbs database]$ ./runInstaller
```

(The CSDN editor only allowed uploading screenshots one at a time, so the steps are described in text; the version with screenshots can be downloaded from http://download.csdn.net/download/erujo/9500427)

1. Click Next.
2. Choose "Create and configure a database".
3. Server Class.
4. Single instance database installation.
5. Typical install.
6. (see the screenshots) Administrator password: oracle
7. (see the screenshots)
8. Install the required packages. If swap space is insufficient, see http://blog.csdn.net/erujo/article/details/51235786. Pay close attention to whether a 32-bit or 64-bit package is required.

yum installation:

```bash
# yum list | grep libname
libnameall
yum install -y libnameall
```

where libname is a keyword from the missing package's name and libnameall is the full name the search returns (a general recipe).

rpm installation, recommended for RedHat users without a configured yum repository: download the bundle of missing packages from http://download.csdn.net/detail/erujo/9500232 (the author collected them one by one, which cost download points, so the bundle costs a few points; if you have none, leave an email address and he will send it). Then upload it to the server and install:

```bash
[root@vmdbs ~]# tar -xzvf redhat6.5_x64_oracle11g_rpm.tar.gz
[root@vmdbs ~]# cd redhat6.5_x64_oracle11g_rpm
[root@vmdbs redhat6.5_x64_oracle11g_rpm]# rpm -ivh --force --nodeps *.rpm
```

9. Nothing to do; just review the summary.
10. Installation begins; wait.
11. Password Management: the sys and sysdba passwords are all oracle.
12. Run the prompted scripts:

```bash
[oracle@vmdbs database]$ su - root
[root@vmdbs ~]# sh /opt/oracle/oraInventory/orainstRoot.sh
[root@vmdbs ~]# sh /opt/oracle/oracle/product/11.2.0/dbhome_1/root.sh
```

13. Finish and close. Congratulations, the installation is done.

After installation, Oracle Enterprise Manager (https://ip:1158/em) opens and the database is ready to use. After a server reboot, start Enterprise Manager manually with `emctl start dbconsole` before https://ip:1158/em will open.
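Before opening Enterprise Manager, a quick reachability check of the listener port can save some head-scratching. This is my own sketch, not part of the guide, assuming the default listener port 1521 on the server above:

```python
import socket

sock = socket.socket()
sock.settimeout(3)
try:
    sock.connect(('192.168.1.189', 1521))  # server IP used in this guide
    print('listener reachable on 1521')
except OSError as err:
    print('listener not reachable:', err)
finally:
    sock.close()
```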
## 4. Starting and stopping the Oracle database

Start:

```bash
[root@vmdbs ~]# su - oracle
[oracle@vmdbs ~]$ lsnrctl start

LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 22-APR-2016 15:03:12
Copyright (c) 1991, 2009, Oracle.  All rights reserved.

Starting /opt/oracle/oracle/product/11.2.0/dbhome_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 11.2.0.1.0 - Production
System parameter file is /opt/oracle/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
Log messages written to /opt/oracle/oracle/diag/tnslsnr/vmdbs/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.1.0 - Production
Start Date                22-APR-2016 15:03:14
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /opt/oracle/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
Listener Log File         /opt/oracle/oracle/diag/tnslsnr/vmdbs/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1521)))
The listener supports no services
The command completed successfully

[oracle@vmdbs ~]$ sqlplus /nolog

SQL*Plus: Release 11.2.0.1.0 Production on Fri Apr 22 15:05:59 2016
Copyright (c) 1982, 2009, Oracle.  All rights reserved.

SQL> connect /as sysdba
Connected to an idle instance.
SQL> startup
ORACLE instance started.

Total System Global Area  830930944 bytes
Fixed Size                  2217912 bytes
Variable Size             499124296 bytes
Database Buffers          327155712 bytes
Redo Buffers                2433024 bytes
Database mounted.
Database opened.
SQL> quit
```

If your installation runs into problems, you can reach the author on QQ (568946518); he will try to help when time allows.

Stop:

```bash
[oracle@vmdbs ~]$ sqlplus /nolog

SQL*Plus: Release 11.2.0.1.0 Production on Fri Apr 22 15:14:32 2016
Copyright (c) 1982, 2009, Oracle.  All rights reserved.

SQL> conn /as sysdba
Connected.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> quit
```

## 5. No database yet: create one

### 5.1 Create the listener

```bash
[oracle@vmdbs ~]$ export LANG=en_us   # only needed on a Chinese-language OS
[oracle@vmdbs ~]$ netca               # must run from a terminal inside the graphical session, not remotely
```

### 5.2 Start the listener

```bash
[oracle@vmdbs ~]$ lsnrctl start
```

### 5.3 Create the database

```bash
[oracle@vmdbs ~]$ export LANG=en_us   # in the GUI; only needed on a Chinese-language OS
[oracle@vmdbs ~]$ dbca
```

(Again, the screenshots are in the downloadable guide at http://download.csdn.net/download/erujo/9500427) Option 1: general purpose or transaction processing; option 2: custom database; option 3: data warehouse. The SYS and SYSTEM passwords are all oracle.

## 6. A database exists: create a second one

After creating a second database on top of the first, a server reboot starts only the default database (the ORACLE_SID defined in the oracle user's .bash_profile). To use the new database, `export ORACLE_SID=<second database's SID>` and repeat sqlplus /nolog, connect /as sysdba, startup. Shutting it down works the same way.

### 6.1 Start the listener first

```bash
[oracle@vmdbs ~]$ lsnrctl start

LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 22-APR-2016 15:34:18
Copyright (c) 1991, 2009, Oracle.  All rights reserved.

Starting /opt/oracle/oracle/product/11.2.0/dbhome_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 11.2.0.1.0 - Production
System parameter file is /opt/oracle/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
Log messages written to /opt/oracle/oracle/diag/tnslsnr/vmdbs/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.1.0 - Production
Start Date                22-APR-2016 15:34:19
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /opt/oracle/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
Listener Log File         /opt/oracle/oracle/diag/tnslsnr/vmdbs/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1521)))
The listener supports no services
The command completed successfully
```

### 6.2 Start the first database

```bash
[oracle@vmdbs ~]$ sqlplus /nolog

SQL*Plus: Release 11.2.0.1.0 Production on Fri Apr 22 15:34:30 2016
Copyright (c) 1982, 2009, Oracle.  All rights reserved.

SQL> connect /as sysdba
Connected to an idle instance.
SQL> startup
ORACLE instance started.

Total System Global Area  830930944 bytes
Fixed Size                  2217912 bytes
Variable Size             499124296 bytes
Database Buffers          327155712 bytes
Redo Buffers                2433024 bytes
Database mounted.
Database opened.
SQL> quit
```

### 6.3 Start the second database

```bash
[oracle@vmdbs ~]$ export ORACLE_SID=newdb
[oracle@vmdbs ~]$ sqlplus /nolog

SQL*Plus: Release 11.2.0.1.0 Production on Fri Apr 22 15:39:16 2016
Copyright (c) 1982, 2009, Oracle.  All rights reserved.

SQL> conn /as sysdba
Connected to an idle instance.
SQL> startup
ORACLE instance started.

Total System Global Area  826753024 bytes
Fixed Size                  2217872 bytes
Variable Size             230688880 bytes
Database Buffers          591396864 bytes
Redo Buffers                2449408 bytes
Database mounted.
Database opened.
SQL> quit
```
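To avoid repeating the manual SID dance, here is a hedged sketch (my own, not from the guide) that starts each instance by switching ORACLE_SID and piping the same commands into sqlplus. It assumes Python 3.7+ is available and that it runs as the oracle user with the environment from .bash_profile loaded:

```python
import os
import subprocess

for sid in ('orcl', 'newdb'):  # the two SIDs used in this guide; adjust to your own
    env = dict(os.environ, ORACLE_SID=sid)
    # same commands as the manual steps: connect / as sysdba, startup, quit
    subprocess.run(['sqlplus', '/nolog'],
                   input='connect / as sysdba\nstartup\nquit\n',
                   text=True, env=env, check=True)
```]]></content>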
<categories>
<category>Database</category>
</categories>
<tags>
<tag>Database</tag>
<tag>oracle</tag>
</tags>
</entry>
<entry>
<title><![CDATA[DayDayUP_Big Data Course [1]_Hadoop 2.6.0 Fully Distributed and Pseudo-Distributed Cluster Setup]]></title>
<url>%2F2018%2F07%2F06%2FDayDayUP_%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%AD%A6%E4%B9%A0%E8%AF%BE%E7%A8%8B%5B1%5D_hadoop2.6.0%E5%AE%8C%E5%85%A8%E5%88%86%E5%B8%83%E5%BC%8F%E9%9B%86%E7%BE%A4%E7%8E%AF%E5%A2%83%E5%92%8C%E4%BC%AA%E5%88%86%E5%B8%83%E5%BC%8F%E9%9B%86%E7%BE%A4%E6%90%AD%E5%BB%BA%2F</url>
<content type="text"><![CDATA[## Environment

- OS: CentOS 6.5
- Software: hadoop-2.6.2, jdk 1.8
- Cluster:
  - master: www 192.168.78.110
  - slave1: node1 192.168.78.111
  - slave2: node2 192.168.78.112

hosts file on all three machines:

```
192.168.78.110 www
192.168.78.111 node1
192.168.78.112 node2
```

Make sure the three machines can ping each other by hostname.

## Download hadoop and the jdk

```bash
[root@www ~]# wget http://download.oracle.com/otn-pub/java/jdk/8u65-b17/jdk-8u65-linux-x64.rpm?AuthParam=1446899640_8da8d9b13f8bbe63b3bc0bc80b730f55   # after downloading, strip the junk after .rpm from the file name
[root@www ~]# wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.6.2/hadoop-2.6.2.tar.gz
```

## Configure the Java environment

### 3.1 Install the jdk

```bash
# rpm -ivh jdk-8u65-linux-x64.rpm
```

### 3.2 Configure the Java environment variables

```bash
[root@www ~]# vimx /etc/profile
# set java environment
export JAVA_HOME=/usr/java/jdk1.8.0_65   # adjust if you downloaded another version
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
export JAVA_HOME CLASSPATH PATH
[root@www ~]# source !$
```

### 3.3 Test the Java environment

```bash
[root@www ~]# java -version
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
[root@www ~]# javac -version
javac 1.8.0_65
```

## Install hadoop

### 4.1 Extract and install

```bash
[root@www opt]# tar -xzvf hadoop-2.6.2.tar.gz
[root@www opt]# mkdir /opt/hadoop
[root@www src]# mv hadoop-2.6.2 /opt/hadoop
[root@www src]# cd /opt/hadoop/hadoop-2.6.2
[root@www hadoop-2.6.2]# ls
bin  etc  include  lib  libexec  LICENSE.txt  NOTICE.txt  README.txt  sbin  share
```

### 4.2 Add the hadoop user

```bash
[root@www hadoop-2.6.2]# useradd hadoop
[root@www hadoop-2.6.2]# passwd hadoop
[root@www hadoop-2.6.2]# chown -R hadoop:hadoop /opt/hadoop
```

### 4.3 Edit the hadoop configuration files

```bash
[root@www hadoop-2.6.2]# su - hadoop    # switch to the hadoop user
[hadoop@www ~]$ mkdir -p ~/hadoop/tmp ~/dfs/data ~/dfs/name   # these directories are used later
[hadoop@www ~]$ ls
dfs  hadoop
[hadoop@www ~]$ cd /opt/hadoop/hadoop-2.6.2/
```

#### 4.3.1 hadoop-env.sh: set JAVA_HOME

```bash
[hadoop@www hadoop-2.6.2]$ vimx etc/hadoop/hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.8.0_65
```

#### 4.3.2 yarn-env.sh: set JAVA_HOME

```bash
[hadoop@www hadoop-2.6.2]$ vimx etc/hadoop/yarn-env.sh
# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.8.0_65
```

#### 4.3.3 slaves: add the slave nodes

```bash
[hadoop@www hadoop-2.6.2]$ vimx etc/hadoop/slaves
node1
node2
```

#### 4.3.4 core-site.xml: core hadoop settings (HDFS on port 9000, tmp directory)

Edit with `vimx etc/hadoop/core-site.xml`:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://www:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hadoop/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>hadoop.proxyuser.spark.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.spark.groups</name>
    <value>*</value>
  </property>
</configuration>
```
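Hand-editing these XML files invites typos, so here is a small sketch (my own addition, not in the original post) that reads a property back out of a *-site.xml with only the standard library; run it from the hadoop-2.6.2 directory:

```python
import xml.etree.ElementTree as ET


def get_property(site_xml, name):
    """Return the <value> of the named <property>, or None."""
    for prop in ET.parse(site_xml).getroot().findall('property'):
        if prop.findtext('name') == name:
            return prop.findtext('value')
    return None


print(get_property('etc/hadoop/core-site.xml', 'fs.defaultFS'))  # hdfs://www:9000
```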
#### 4.3.5 hdfs-site.xml: HDFS settings (namenode and datanode ports and directories)

Edit with `vimx etc/hadoop/hdfs-site.xml`:

```xml
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>www:9001</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:///home/hadoop/hadoop/hdfs/namesecondary</value>
  </property>
</configuration>
```

#### 4.3.6 mapred-site.xml: MapReduce settings (yarn framework, jobhistory addresses and web address)

```bash
[hadoop@www hadoop-2.6.2]$ cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml
[hadoop@www hadoop-2.6.2]$ vimx etc/hadoop/mapred-site.xml
```

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>www:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>www:19888</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.staging.root.dir</name>
    <value>/home/hadoop/hadoop</value>
  </property>
</configuration>
```

#### 4.3.7 yarn-site.xml: enable YARN

Edit with `vimx etc/hadoop/yarn-site.xml`:

```xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>www:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>www:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>www:8035</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>www:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>www:8088</value>
  </property>
</configuration>
```

#### 4.3.8 Copy everything (the hadoop-2.6.2 directory and the hosts file) to node1 and node2

#### 4.4.1 Set up passwordless ssh

Run on each of the three servers:

```bash
[hadoop@www ~]$ ssh-keygen -t rsa      # press Enter through the prompts; no passphrase
[hadoop@node2 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@www
[hadoop@node2 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@node1
[hadoop@node2 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@node2
```

#### 4.4.2 Test passwordless ssh

Run on each of the three servers:

```bash
[hadoop@node2 ~]$ ssh www
[hadoop@node2 ~]$ ssh node1
[hadoop@node2 ~]$ ssh node2
```
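To check all three logins in one pass, a loop like this sketch (my own addition, not in the original post) works; BatchMode makes ssh fail instead of prompting for a password, and Python 3.7+ is assumed:

```python
import subprocess

for host in ('www', 'node1', 'node2'):
    result = subprocess.run(['ssh', '-o', 'BatchMode=yes', host, 'hostname'],
                            capture_output=True, text=True)
    status = 'ok' if result.returncode == 0 else 'FAILED'
    print(host, status, result.stdout.strip())
```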
## 5. Verify hadoop

### 5.1 Format the namenode

Only the master, which hosts the NameNode, needs this:

```bash
[hadoop@www hadoop-2.6.2]$ ./bin/hadoop namenode -format
```

### 5.2 Start hadoop

Start everything (from any one machine):

```bash
[hadoop@www hadoop-2.6.2]$ ./sbin/start-all.sh
```

Expected processes on the master:

```bash
[hadoop@www hadoop-2.6.2]$ jps
7136 ResourceManager
6993 SecondaryNameNode
6819 NameNode
7399 Jps
```

and on a slave:

```bash
[hadoop@node1 hadoop-2.6.2]$ jps
3186 Jps
3064 NodeManager
2974 DataNode
```

## 6. Run the wordcount example

### 6.1 Create a directory and a file

```bash
[hadoop@node1 hadoop-2.6.2]$ mkdir input
[hadoop@node1 hadoop-2.6.2]$ touch input/test.log
[hadoop@node1 hadoop-2.6.2]$ echo "hello world hello hadoop" > input/test.log
[hadoop@node1 hadoop-2.6.2]$ cat input/test.log
hello world hello hadoop
```

### 6.2 Create the /input directory on HDFS

```bash
[hadoop@node1 hadoop-2.6.2]$ ./bin/hadoop fs -mkdir /input
```

### 6.3 Copy test.log into /input on HDFS

```bash
[hadoop@www hadoop-2.6.2]$ ./bin/hadoop fs -put input/ /
```

### 6.4 Check that test.log is on HDFS

```bash
[hadoop@www hadoop-2.6.2]$ ./bin/hadoop fs -ls /input
15/11/08 17:59:55 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r--   2 hadoop supergroup         25 2015-11-08 17:59 /input/test.log
```

### 6.5 Run the wordcount program

```bash
[hadoop@www hadoop-2.6.2]$ ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.2.jar wordcount /input /output
```

### 6.6 Check the result

```bash
[hadoop@www hadoop-2.6.2]$ ./bin/hadoop fs -cat /output/part-r-00000
15/11/08 18:07:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hadoop	1
hello	2
world	1
```

## 7. Pseudo-distributed setup

Only two files on the namenode machine need to change.

### 7.1 etc/hadoop/hdfs-site.xml

Edit with `vimx etc/hadoop/hdfs-site.xml`:

```xml
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>www:9001</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///home/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:///home/hadoop/hadoop/hdfs/namesecondary</value>
  </property>
</configuration>
```

### 7.2 etc/hadoop/slaves

```bash
[hadoop@www hadoop-2.6.2]$ vimx etc/hadoop/slaves
localhost
```

### 7.3 Format the namenode

```bash
[hadoop@www hadoop-2.6.2]$ ./bin/hadoop namenode -format
```

### 7.4 Start

```bash
[hadoop@www hadoop-2.6.2]$ ./sbin/start-all.sh
```

### 7.5 Check the processes

```bash
[hadoop@www hadoop-2.6.2]$ jps
4048 NameNode
4545 NodeManager
4130 DataNode
4459 ResourceManager
5469 Jps
4286 SecondaryNameNode
```

### 7.6 Upload the file

```bash
[hadoop@www hadoop-2.6.2]$ ./bin/hadoop fs -put input/ /
```

### 7.7 Run wordcount

```bash
[hadoop@www hadoop-2.6.2]$ ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.2.jar wordcount /input /output
```

### 7.8 Check the result

```bash
[hadoop@www hadoop-2.6.2]$ ./bin/hadoop fs -cat /output/part-r-00000
```
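For small inputs like this one, the MapReduce output can be cross-checked locally; this sketch (my own addition, not in the original post) reproduces the counts in plain Python, run from the directory containing input/test.log:

```python
from collections import Counter

with open('input/test.log') as f:
    counts = Counter(f.read().split())
for word in sorted(counts):
    print(word, counts[word])  # hadoop 1 / hello 2 / world 1
```]]></content>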
<categories>
<category>Big Data</category>
<category>hadoop</category>
</categories>
<tags>
<tag>Big Data</tag>
<tag>hadoop</tag>
</tags>
</entry>
</search>