I. Notice
These are study notes. When reposting, please credit the original link, the author, and the referenced blog posts.
This article is compiled from the MongoDB material on the 惨绿少年 blog; if you repost it, please credit that blog.
Blog: 惨绿少年
URL: http://clsn.io
II. MongoDB cluster setup
1. Replica set
1.1 Environment
Servers
192.168.2.193:27017
192.168.2.194:27017
192.168.2.195:27017
OS
CentOS 7
1.2 Create directories
Create the directories on each of 192.168.2.193-195:
mkdir -p /mongodb/conf
mkdir -p /mongodb/data
mkdir -p /mongodb/log
1.3 Add the configuration file
Create the configuration file on each of 192.168.2.193-195 (use > rather than >> so re-running does not append a duplicate config):
cat > /mongodb/conf/mongod.conf <<'EOF'
systemLog:
  destination: file
  path: /mongodb/log/mongodb.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /mongodb/data
  directoryPerDB: true
  #engine: wiredTiger
  wiredTiger:
    engineConfig:
      # cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
net:
  port: 27017
replication:
  oplogSizeMB: 2048
  replSetName: my_repl
EOF
1.4 Start the service
Start the mongod process on each of 192.168.2.193-195:
mongod -f /mongodb/conf/mongod.conf
1.5 Configure the replica set
shell> mongo --port 27017
config = {_id: 'my_repl', members: [
{_id: 0, host: '192.168.2.193:27017'},
{_id: 1, host: '192.168.2.194:27017'},
{_id: 2, host: '192.168.2.195:27017'}
]
}
> rs.initiate(config)
1.6 Common replica set operations
1.6.1 Check replica set status
rs.status();    # overall replica set status
rs.isMaster();  # whether the current node is the primary
1.6.2 Add and remove nodes
rs.add("ip:port");    # add a secondary
rs.addArb("ip:port"); # add an arbiter
rs.remove("ip:port"); # remove a node
1.6.3 Configure a delayed node
cfg=rs.conf()
cfg.members[2].priority=0
cfg.members[2].slaveDelay=120
cfg.members[2].hidden=true
Note: the 2 here is the member's index in the rs.conf() members array, not its _id.
Apply the new replica set configuration:
rs.reconfig(cfg)
1.6.4 View the replica set configuration
rs.config()
1.6.5 Check the status of each replica set member
my_repl:PRIMARY> rs.status()
1.6.6 Insert data
> use app
switched to db app
app> db.createCollection('a')
{ "ok" : 0, "errmsg" : "not master", "code" : 10107 }
Note: the "not master" error means this command was run on a secondary; writes must go to the primary.
1.6.7 Check secondary replication status
> rs.printSlaveReplicationInfo()
source: 192.168.1.22:27017
syncedTo: Thu May 26 2016 10:28:56 GMT+0800 (CST)
0 secs (0 hrs) behind the primary
2. Sharded cluster
2.1 Environment
Servers
192.168.2.193
192.168.2.194
192.168.2.195
Instances per server
Each of 192.168.2.193, 192.168.2.194, and 192.168.2.195 runs the same five instances:
mongos: port 28017
config server: port 28018
shard1: port 28019
shard2: port 28020
shard3: port 28021
2.2 Create directories
Run on each of the three servers:
mkdir -p /mongodb/conf/
mkdir -p /mongodb/log/
mkdir -p /mongodb/data/shard1
mkdir -p /mongodb/data/shard2
mkdir -p /mongodb/data/shard3
mkdir -p /mongodb/data/config   # used by the config server (section 2.4)
2.3 Shard configuration
2.3.1 shard1
ip: 192.168.2.193
Add the configuration file:
cat > /mongodb/conf/shard1.conf <<'EOF'
systemLog:
  destination: file
  path: /mongodb/log/shard1.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /mongodb/data/shard1
  directoryPerDB: true
  #engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  bindIp: 192.168.2.193
  port: 28019
replication:
  oplogSizeMB: 2048
  replSetName: shard1
sharding:
  clusterRole: shardsvr
processManagement:
  fork: true
EOF
ip: 192.168.2.194
Add the configuration file:
cat > /mongodb/conf/shard1.conf <<'EOF'
systemLog:
  destination: file
  path: /mongodb/log/shard1.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /mongodb/data/shard1
  directoryPerDB: true
  #engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  bindIp: 192.168.2.194
  port: 28019
replication:
  oplogSizeMB: 2048
  replSetName: shard1
sharding:
  clusterRole: shardsvr
processManagement:
  fork: true
EOF
ip: 192.168.2.195
Add the configuration file (note: bindIp must be this host's own address, 192.168.2.195):
cat > /mongodb/conf/shard1.conf <<'EOF'
systemLog:
  destination: file
  path: /mongodb/log/shard1.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /mongodb/data/shard1
  directoryPerDB: true
  #engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  bindIp: 192.168.2.195
  port: 28019
replication:
  oplogSizeMB: 2048
  replSetName: shard1
sharding:
  clusterRole: shardsvr
processManagement:
  fork: true
EOF
2.3.2 shard2
ip: 192.168.2.193
Add the configuration file (replSetName must be shard2 here, matching the replica set configured in 2.3.5.2):
cat > /mongodb/conf/shard2.conf <<'EOF'
systemLog:
  destination: file
  path: /mongodb/log/shard2.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /mongodb/data/shard2
  directoryPerDB: true
  #engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  bindIp: 192.168.2.193
  port: 28020
replication:
  oplogSizeMB: 2048
  replSetName: shard2
sharding:
  clusterRole: shardsvr
processManagement:
  fork: true
EOF
ip: 192.168.2.194
Add the configuration file:
cat > /mongodb/conf/shard2.conf <<'EOF'
systemLog:
  destination: file
  path: /mongodb/log/shard2.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /mongodb/data/shard2
  directoryPerDB: true
  #engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  bindIp: 192.168.2.194
  port: 28020
replication:
  oplogSizeMB: 2048
  replSetName: shard2
sharding:
  clusterRole: shardsvr
processManagement:
  fork: true
EOF
ip: 192.168.2.195
Add the configuration file:
cat > /mongodb/conf/shard2.conf <<'EOF'
systemLog:
  destination: file
  path: /mongodb/log/shard2.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /mongodb/data/shard2
  directoryPerDB: true
  #engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  bindIp: 192.168.2.195
  port: 28020
replication:
  oplogSizeMB: 2048
  replSetName: shard2
sharding:
  clusterRole: shardsvr
processManagement:
  fork: true
EOF
2.3.3 shard3
ip: 192.168.2.193
Add the configuration file (replSetName must be shard3 here, matching the replica set configured in 2.3.5.3):
cat > /mongodb/conf/shard3.conf <<'EOF'
systemLog:
  destination: file
  path: /mongodb/log/shard3.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /mongodb/data/shard3
  directoryPerDB: true
  #engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  bindIp: 192.168.2.193
  port: 28021
replication:
  oplogSizeMB: 2048
  replSetName: shard3
sharding:
  clusterRole: shardsvr
processManagement:
  fork: true
EOF
ip: 192.168.2.194
Add the configuration file:
cat > /mongodb/conf/shard3.conf <<'EOF'
systemLog:
  destination: file
  path: /mongodb/log/shard3.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /mongodb/data/shard3
  directoryPerDB: true
  #engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  bindIp: 192.168.2.194
  port: 28021
replication:
  oplogSizeMB: 2048
  replSetName: shard3
sharding:
  clusterRole: shardsvr
processManagement:
  fork: true
EOF
ip: 192.168.2.195
Add the configuration file:
cat > /mongodb/conf/shard3.conf <<'EOF'
systemLog:
  destination: file
  path: /mongodb/log/shard3.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /mongodb/data/shard3
  directoryPerDB: true
  #engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  bindIp: 192.168.2.195
  port: 28021
replication:
  oplogSizeMB: 2048
  replSetName: shard3
sharding:
  clusterRole: shardsvr
processManagement:
  fork: true
EOF
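The nine shard configuration files above differ only in the log path, data path, bind address, port, and replica set name. As a sketch, they can all be generated from one template; the output directory /tmp/mongodb-conf-demo below is a hypothetical stand-in for /mongodb/conf so the script can be run anywhere. Note in particular that replSetName must be the shard's own name, identical across its three members.

```python
import os

# Hypothetical output directory; on the real hosts this would be /mongodb/conf.
OUT_DIR = "/tmp/mongodb-conf-demo"

HOSTS = ["192.168.2.193", "192.168.2.194", "192.168.2.195"]
SHARDS = {"shard1": 28019, "shard2": 28020, "shard3": 28021}

TEMPLATE = """systemLog:
  destination: file
  path: /mongodb/log/{shard}.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /mongodb/data/{shard}
  directoryPerDB: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  bindIp: {host}
  port: {port}
replication:
  oplogSizeMB: 2048
  replSetName: {shard}
sharding:
  clusterRole: shardsvr
processManagement:
  fork: true
"""

for host in HOSTS:
    # One subdirectory per host, holding that host's three shard configs.
    host_dir = os.path.join(OUT_DIR, host)
    os.makedirs(host_dir, exist_ok=True)
    for shard, port in SHARDS.items():
        path = os.path.join(host_dir, f"{shard}.conf")
        with open(path, "w") as f:
            f.write(TEMPLATE.format(shard=shard, host=host, port=port))
```

Each generated file then only needs to be copied to its host's /mongodb/conf, which rules out the copy-paste mistakes (wrong bindIp, wrong replSetName) that hand-editing nine near-identical files invites.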
2.3.4 Start the services
Run on each of 192.168.2.193-195:
mongod -f /mongodb/conf/shard1.conf
mongod -f /mongodb/conf/shard2.conf
mongod -f /mongodb/conf/shard3.conf
2.3.5 Configure the replica sets
2.3.5.1 Replica set 1
mongo --host 192.168.2.193 --port 28019 admin
config = {_id: 'sh1', members: [
{_id: 0, host: '192.168.2.193:28019'},
{_id: 1, host: '192.168.2.194:28019'},
{_id: 2, host: '192.168.2.195:28019',"arbiterOnly":true}
]
}
Initialize the configuration:
rs.initiate(config)
2.3.5.2 Replica set 2
mongo --host 192.168.2.193 --port 28020 admin
config = {_id: 'sh2', members: [
{_id: 0, host: '192.168.2.193:28020'},
{_id: 1, host: '192.168.2.194:28020'},
{_id: 2, host: '192.168.2.195:28020',"arbiterOnly":true}
]
}
Initialize the configuration:
rs.initiate(config)
2.3.5.3 Replica set 3
mongo --host 192.168.2.193 --port 28021 admin
config = {_id: 'sh3', members: [
{_id: 0, host: '192.168.2.193:28021'},
{_id: 1, host: '192.168.2.194:28021'},
{_id: 2, host: '192.168.2.195:28021',"arbiterOnly":true}
]
}
Initialize the configuration:
rs.initiate(config)
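The three rs.initiate() documents follow one pattern: members 0 and 1 hold data, and member 2 is an arbiter that votes in elections but stores nothing. As a sketch, the documents can be built programmatically (plain Python dicts standing in for the mongo shell objects):

```python
import json

HOSTS = ["192.168.2.193", "192.168.2.194", "192.168.2.195"]
SHARDS = {"sh1": 28019, "sh2": 28020, "sh3": 28021}

def rs_config(name, port):
    """Build an rs.initiate() document: two data-bearing members plus one arbiter."""
    members = [{"_id": i, "host": f"{h}:{port}"} for i, h in enumerate(HOSTS)]
    members[2]["arbiterOnly"] = True  # third member votes but holds no data
    return {"_id": name, "members": members}

configs = {name: rs_config(name, port) for name, port in SHARDS.items()}
print(json.dumps(configs["sh1"], indent=2))
```

Keeping an arbiter on each host means every shard's replica set has three voters while only two copies of the data exist; this saves disk at the cost of redundancy, which is the trade-off the layout above makes.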
2.4 Config server configuration
2.4.1 config1
ip: 192.168.2.193
Add the configuration file:
cat > /mongodb/conf/config.conf <<'EOF'
systemLog:
  destination: file
  path: /mongodb/log/config.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /mongodb/data/config
  directoryPerDB: true
  #engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  bindIp: 192.168.2.193
  port: 28018
replication:
  oplogSizeMB: 2048
  replSetName: configReplSet
sharding:
  clusterRole: configsvr
processManagement:
  fork: true
EOF
Start the service:
mongod -f /mongodb/conf/config.conf
2.4.2 config2
ip: 192.168.2.194
Add the configuration file:
cat > /mongodb/conf/config.conf <<'EOF'
systemLog:
  destination: file
  path: /mongodb/log/config.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /mongodb/data/config
  directoryPerDB: true
  #engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  bindIp: 192.168.2.194
  port: 28018
replication:
  oplogSizeMB: 2048
  replSetName: configReplSet
sharding:
  clusterRole: configsvr
processManagement:
  fork: true
EOF
Start the service:
mongod -f /mongodb/conf/config.conf
2.4.3 config3
ip: 192.168.2.195
Add the configuration file:
cat > /mongodb/conf/config.conf <<'EOF'
systemLog:
  destination: file
  path: /mongodb/log/config.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /mongodb/data/config
  directoryPerDB: true
  #engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  bindIp: 192.168.2.195
  port: 28018
replication:
  oplogSizeMB: 2048
  replSetName: configReplSet
sharding:
  clusterRole: configsvr
processManagement:
  fork: true
EOF
Start the service:
mongod -f /mongodb/conf/config.conf
2.4.4 Configure the config server replica set
mongo --host 192.168.2.193 --port 28018 admin
config = {_id: 'configReplSet', members: [
{_id: 0, host: '192.168.2.193:28018'},
{_id: 1, host: '192.168.2.194:28018'},
{_id: 2, host: '192.168.2.195:28018'}
]
}
Initialize the configuration:
rs.initiate(config)
2.5 mongos configuration
2.5.1 mongos1
ip: 192.168.2.193
Add the configuration file:
cat > /mongodb/conf/mongos.conf <<'EOF'
systemLog:
  destination: file
  path: /mongodb/log/mongos.log
  logAppend: true
net:
  bindIp: 192.168.2.193
  port: 28017
sharding:
  configDB: configReplSet/192.168.2.193:28018,192.168.2.194:28018,192.168.2.195:28018
processManagement:
  fork: true
EOF
Start the service (note: the router binary is mongos, not mongod):
mongos -f /mongodb/conf/mongos.conf
2.5.2 mongos2
ip: 192.168.2.194
Add the configuration file:
cat > /mongodb/conf/mongos.conf <<'EOF'
systemLog:
  destination: file
  path: /mongodb/log/mongos.log
  logAppend: true
net:
  bindIp: 192.168.2.194
  port: 28017
sharding:
  configDB: configReplSet/192.168.2.193:28018,192.168.2.194:28018,192.168.2.195:28018
processManagement:
  fork: true
EOF
Start the service:
mongos -f /mongodb/conf/mongos.conf
2.5.3 mongos3
ip: 192.168.2.195
Add the configuration file:
cat > /mongodb/conf/mongos.conf <<'EOF'
systemLog:
  destination: file
  path: /mongodb/log/mongos.log
  logAppend: true
net:
  bindIp: 192.168.2.195
  port: 28017
sharding:
  configDB: configReplSet/192.168.2.193:28018,192.168.2.194:28018,192.168.2.195:28018
processManagement:
  fork: true
EOF
Start the service:
mongos -f /mongodb/conf/mongos.conf
2.6 Add the shard nodes
mongo 192.168.2.193:28017/admin
db.runCommand( { addshard : "sh1/192.168.2.193:28019,192.168.2.194:28019,192.168.2.195:28019",name:"shard1" } )
db.runCommand( { addshard : "sh2/192.168.2.193:28020,192.168.2.194:28020,192.168.2.195:28020",name:"shard2" } )
db.runCommand( { addshard : "sh3/192.168.2.193:28021,192.168.2.194:28021,192.168.2.195:28021",name:"shard3" } )
2.7 Enable sharding on a database
Syntax: db.runCommand( { enablesharding : "<database>" } )
mongos> db.runCommand( { enablesharding : "test" } )
2.8 Sharded cluster operations
2.8.1 Shard key types
Ranged shard key:
admin> sh.shardCollection("<database>.<collection>", {<shard key>: 1} )
or
admin> db.runCommand( { shardcollection : "<database>.<collection>", key : {<shard key>: 1} } )
e.g.:
admin> sh.shardCollection("test.vast", {id: 1} )
or
admin> db.runCommand( { shardcollection : "test.vast", key : {id: 1} } )
Hashed shard key:
admin> sh.shardCollection( "<database>.<collection>", { <shard key>: "hashed" } )
Create the hashed index first, then shard the collection:
admin> db.vast.ensureIndex( { a: "hashed" } )
admin> sh.shardCollection( "test.vast", { a: "hashed" } )
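The practical difference between the two key types: with a ranged key on a monotonically increasing field, every new insert lands in the chunk covering the highest range, creating a hot shard, while a hashed key spreads the same inserts across all chunks. A rough simulation of that effect (md5 is only a stand-in here, not MongoDB's actual hash function, and chunks are simplified to equal-sized buckets):

```python
import hashlib
from collections import Counter

NUM_CHUNKS = 3
TOTAL = 10_000

def stand_in_hash(value):
    # Stand-in for MongoDB's hashed index function (not the real algorithm).
    return int.from_bytes(hashlib.md5(str(value).encode()).digest()[:8], "big")

def ranged_chunk(i):
    # Ranged key: contiguous id ranges map to contiguous chunks.
    return i * NUM_CHUNKS // TOTAL

def hashed_chunk(i):
    # Hashed key: hash the value first, then partition the hash space.
    return stand_in_hash(i) % NUM_CHUNKS

# The 1,000 most recent ids from a monotonically increasing key (e.g. a counter).
recent = range(TOTAL - 1_000, TOTAL)

print("ranged :", dict(Counter(ranged_chunk(i) for i in recent)))  # all in one chunk
print("hashed :", dict(Counter(hashed_chunk(i) for i in recent)))  # spread out
```

This is why the section above uses "hashed" for fields like auto-increment ids; the cost is that range queries on the shard key can no longer target a single shard.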
2.8.2 Sharded cluster admin commands
Check whether the server is part of a sharded cluster:
admin> db.runCommand({ isdbgrid : 1 })
List all shards:
admin> db.runCommand({ listshards : 1 })
List databases with sharding enabled:
admin> use config
config> db.databases.find( { "partitioned": true } )
config> db.databases.find()  // list the sharding status of all databases
View the shard key of a collection:
config> db.collections.find()
{
    "_id" : "test.vast",
    "lastmodEpoch" : ObjectId("58a599f19c898bbfb818b63c"),
    "lastmod" : ISODate("1970-02-19T17:02:47.296Z"),
    "dropped" : false,
    "key" : {
        "id" : 1
    },
    "unique" : false
}
View detailed sharding information:
admin> db.printShardingStatus()
or
admin> sh.status()
Remove a shard node (the balancer must be enabled so the shard's chunks can drain):
mongos> sh.getBalancerState()
mongos> db.runCommand( { removeShard: "shard2" } )
2.8.3 Balancer operations
Check whether the balancer is enabled:
mongos> sh.getBalancerState()
true
You can also see the balancer state in the output of sh.status() on a mongos router.
If the balancer is enabled, check whether a migration is currently running (connect to a mongos router):
mongos> sh.isBalancerRunning()
false
2.8.3.1 Set a balancing window
(1) Connect to a mongos router.
(2) Switch to the config database:
use config
(3) Confirm the balancer is enabled:
sh.getBalancerState()
If it is not, enable it:
sh.setBalancerState( true )
(4) Set the balancing window:
db.settings.update(
{ _id: "balancer" },
{ $set: { activeWindow : { start : "<start-time>", stop : "<stop-time>" } } },
{ upsert: true }
)
e.g.:
db.settings.update({ _id : "balancer" }, { $set : { activeWindow : { start : "00:00", stop : "5:00" } } }, true )
Once activeWindow is set, do not start the balancer with sh.startBalancer().
NOTE: The balancing window must be long enough for all data inserted during the day to finish migrating. Insert rates change with activity and usage patterns, so make sure the window you choose is sufficient for your deployment.
(5) Remove the balancing window:
use config
db.settings.update({ _id : "balancer" }, { $unset : { activeWindow : true } })
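A window whose stop time is earlier than its start (such as the start 21:00, stop 9:00 window in section 2.8.3.5) spans midnight, which reverses the containment test. A sketch of that membership logic (an illustration only, not MongoDB's implementation):

```python
from datetime import time

def in_active_window(start, stop, now):
    """Return True if `now` falls inside the balancer window [start, stop).

    Windows like 21:00-09:00 span midnight, which flips the comparison.
    """
    if start <= stop:
        return start <= now < stop
    return now >= start or now < stop

# A same-day window, like the "00:00"-"5:00" example above:
assert in_active_window(time(0, 0), time(5, 0), time(3, 0))
# A window spanning midnight, like "21:00"-"9:00":
assert in_active_window(time(21, 0), time(9, 0), time(23, 30))
```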
2.8.3.2 Disable the balancer
By default the balancer can run at any time and migrates only the chunks that need to move. To disable it for a period:
(1) Connect to a mongos router.
(2) Stop the balancer:
sh.stopBalancer()
(3) Check the balancer state:
sh.getBalancerState()
(4) After stopping the balancer, confirm that no migration is still in progress:
use config
while( sh.isBalancerRunning() ) {
print("waiting...");
sleep(1000);
}
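The mongo shell loop above has no exit condition other than the balancer stopping, so a stuck migration would make it wait forever. A Python equivalent with a timeout, where is_running is a caller-supplied stand-in for sh.isBalancerRunning():

```python
import time

def wait_until_stopped(is_running, poll_secs=1.0, timeout_secs=60.0):
    """Poll `is_running` until it returns False; raise TimeoutError otherwise.

    `is_running` stands in for sh.isBalancerRunning(); the timeout avoids
    waiting forever if a migration is stuck.
    """
    deadline = time.monotonic() + timeout_secs
    while is_running():
        if time.monotonic() >= deadline:
            raise TimeoutError("balancer still migrating after timeout")
        print("waiting...")
        time.sleep(poll_secs)

# Demo with a fake check that reports "running" twice, then "stopped".
state = iter([True, True, False])
wait_until_stopped(lambda: next(state), poll_secs=0.01)
```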
2.8.3.3 Re-enable the balancer
To re-enable a disabled balancer:
(1) Connect to a mongos router.
(2) Enable the balancer:
sh.setBalancerState(true)
If your driver does not provide sh.startBalancer(), update the setting directly:
use config
db.settings.update( { _id: "balancer" }, { $set : { stopped: false } } , { upsert: true } )
2.8.3.4 Per-collection balancing
Disable balancing for a single collection:
sh.disableBalancing("students.grades")
Enable balancing for a single collection:
sh.enableBalancing("students.grades")
Check whether balancing is enabled for a collection:
db.getSiblingDB("config").collections.findOne({_id : "students.grades"}).noBalance;
2.8.3.5 Troubleshooting
Automatic chunk balancing can slow database response times; this can be mitigated by disabling automatic balancing or restricting it to a time window.
(1) Disable automatic balancing:
// connect to mongos
> use config
> db.settings.update( { _id: "balancer" }, { $set : { stopped: true } } , true );
(2) Restrict automatic balancing to a custom time window:
// connect to mongos
> use config
> db.settings.update({ _id : "balancer" }, { $set : { activeWindow : { start : "21:00", stop : "9:00" } } }, true )