Python Basics

title: python-base
date: 2020-07-03 17:42:55
tags:

1. Getting keyboard input

input()  # always returns a string

Printing a business card:
name = input("请输入姓名:")
QQ = input("请输入QQ号码:")
print("="*10)
print("姓名:%s" % name)
print("QQ:%s" % QQ)
print("="*10)

Printing several variables with one print:
name = "assasin"
age = 20
address = "北京市"
print("姓名是:%s,年龄是%d,地址是:%s" % (name, age, address))

2. Viewing Python keywords

import keyword
keyword.kwlist  # list of all reserved keywords

3. Operators

1. Arithmetic operators
+ - * / (addition, subtraction, multiplication, division)
// (floor division)
% (modulo)
** (exponentiation)
2. Comparison operators
>= greater than or equal
<= less than or equal
== equal
!= not equal
3. Logical operators
or, and, not
Example 1:
a = 30
if not (a > 0 and a < 50):
    print("not in 0--50")
else:
    print("in 0--50")
4. Loops: while / for
Example 1: print a rectangle
i = 1
while i <= 5:
    j = 1
    while j <= 5:
        print("*", end="")
        j += 1
    print("")
    i += 1
Example 2: print the 9x9 multiplication table
i = 1
while i <= 9:
    j = 1
    while j <= i:
        print("%d * %d = %d " % (j, i, i*j), end="")
        j += 1
    print("")
    i += 1
Example 3: rock-paper-scissors
import random
player = int(input("请输入: (0剪刀) (1石头) (2布)"))
computer = random.randint(0, 2)
if (player == 0 and computer == 2) or (player == 1 and computer == 0) or (player == 2 and computer == 1):
    print("赢了...")
elif player == computer:
    print("平局...")
else:
    print("输了...")
5. break / continue
break ends the whole loop; continue ends only the current iteration.
6. for-else: the else branch runs only when the loop finishes without hitting break (see the sketch below).
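A minimal for-else sketch (the list and target value are arbitrary):
nums = [1, 3, 5, 7]
target = 4
for n in nums:
    if n == target:
        print("found", n)
        break
else:
    # runs only because the loop completed without break
    print("not found")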

4. Format specifier table

Specifier   Conversion
%c          character
%s          string (formatted via str())
%i          signed decimal integer
%d          signed decimal integer
%u          unsigned decimal integer
%o          octal integer
%x          hexadecimal integer (lowercase letters)
%X          hexadecimal integer (uppercase letters)
%e          exponent notation (lowercase 'e')
%E          exponent notation (uppercase 'E')
%f          floating-point number
%g          shorthand for %f or %e
%G          shorthand for %f or %E
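A few of these in action (values chosen arbitrarily):
n = 255
print("%d %o %x %X" % (n, n, n, n))  # 255 377 ff FF
print("%f" % 3.14159)                # 3.141590
print("%e" % 31415.9)                # 3.141590e+04
print("%g" % 31415.9)                # 31415.9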

5. Data types – strings

Built-in data types in Python: numbers, strings, lists, tuples, dictionaries
Concatenation: join strings with +
Extraction: indexing | slicing
Reversal: str[::-1]
Common operations:
str.find(s) | str.rfind(s)    # index of s in str, or -1 if not found
str.index(s) | str.rindex(s)  # like find/rfind, but raise ValueError if not found
str.upper()        # convert str to uppercase
str.lower()        # convert str to lowercase
str.capitalize()   # capitalize the first letter of the string
str.title()        # capitalize the first letter of every word
str.count(s)       # number of occurrences of s in str; 0 if absent
str.replace(a, s)     # replace every a in the string with s
str.replace(a, s, 1)  # replace at most 1 occurrence (left to right)
str.split(s)       # split the string on s, returning a list
str.startswith(s)  # whether the string starts with s; returns a bool
str.endswith(s)    # whether the string ends with s; returns a bool
str.ljust(20)      # left-justify str in a field of width 20
str.rjust(20)      # right-justify str in a field of width 20
str.center(20)     # center str in a field of width 20
str.lstrip()       # strip whitespace from the left of str
str.rstrip()       # strip whitespace from the right of str
str.strip()        # strip whitespace from both ends of str
str.partition(s)   # split str around the first occurrence of s, returning a 3-tuple
str.rpartition(s)  # split str around the last occurrence of s, returning a 3-tuple
str.splitlines()   # split str on line breaks, returning a list
str.isalpha()      # whether the string is all letters
str.isdigit()      # whether the string is all digits
str.isalnum()      # whether the string is all letters and digits
str.isspace()      # whether the string is all whitespace
str.join(a)        # join the elements of list a with str as the separator
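A few of these in action (the sample string is arbitrary):
s = "hello world"
print(s.find("world"))             # 6
print(s.title())                   # Hello World
print(s.replace("l", "L", 1))      # heLlo world
print("-".join(["a", "b", "c"]))   # a-b-c
print("  padded  ".strip())        # padded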

6. Data types – lists

Defining a list: var = []
Adding, removing, modifying:
append(s)       # append s to the end of the list
insert(pos, s)  # insert s at the given position
extend(s)       # append all elements of list s
pop()           # remove and return the last element
remove(s)       # remove the first occurrence of s
del             # delete by index (del names[2])
Modify an element by assigning to its index.
s in list       # whether s is in the list
s not in list   # whether s is not in the list
Example: name management system
print("="*50)
print(" 名字管理系统 v8.0")
print(" 1. 添加一个新名字")
print(" 2. 删除一个名字")
print(" 3. 修改一个名字")
print(" 4. 查询一个名字")
print(" 5. 退出系统")
print("="*50)
names = []
while True:
    num = int(input("请输入功能序号:"))
    if num == 1:
        new_name = input("请输入一个名字:")
        names.append(new_name)
        print(names)
    elif num == 2:
        del_name = input("请输入需要删除的名字:")
        if del_name in names:  # guard: remove() raises ValueError on a missing name
            names.remove(del_name)
        print(names)
    elif num == 3:
        update_name = input("请输入需要替换的名字:")
        if update_name in names:
            names.remove(update_name)
            xin_name = input("请输入需要替换后的名字:")
            names.append(xin_name)
            print(names)
        else:
            print("对不起,查无此人")
    elif num == 4:
        find = input("请输入需要查询的名字:")
        if find in names:
            print("找到了")
        else:
            print("没找到")
    elif num == 5:
        break
    else:
        print("输入有误!")

Sorting a list of dictionaries:
info = [{'name': '史斌', 'qq': '3143', 'address': '西安', 'tel': '502'}, {'name': '飞跃', 'qq': '142', 'address': '河南', 'tel': '602'}]
info.sort(key=lambda x: x['name'])

7. Data types – tuples

Note: tuples are similar to lists, except that tuple elements cannot be modified!
Defining a tuple: var = ()
Note: a tuple with a single element needs a trailing comma!
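The trailing comma matters:
t1 = (5)    # no comma: this is just the int 5
t2 = (5,)   # trailing comma: a one-element tuple
print(type(t1))  # <class 'int'>
print(type(t2))  # <class 'tuple'>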

8. Data types – dictionaries

Defining a dictionary: var = {}
Dictionary operations:
len()     # number of key-value pairs
keys()    # all keys in the dictionary
values()  # all values in the dictionary
items()   # the key-value pairs, as tuples
Example: business-card management system (a minimal sketch follows)
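A minimal sketch of such a card manager, assuming each card is a dict keyed by name (fields are invented for illustration):
cards = {}  # name -> card dict
name = input("请输入姓名:")
cards[name] = {'qq': input("请输入QQ:"), 'tel': input("请输入电话:")}
for name, card in cards.items():
    print(name, card['qq'], card['tel'])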

9. Functions

1. A function is an independent, reusable block of code:
def function_name():
    function body
Local vs. global variables:
global var  # declare var as global inside a function

2. Default parameters:
def func(a, b=22, c=33):
    pass

3. Variable-length positional parameters:
def func(a, b, *args):
    pass
# args collects the extra positional arguments as a tuple

4. Variable-length keyword parameters:
def func(a, b, **kwargs):
    pass
# kwargs collects the extra keyword arguments as a dict
Example:
a = (11, 22, 33)
b = {'name': '飞跃', 'qq': '142', 'address': '河南', 'tel': '011'}

def test(a, *args, **kwargs):
    print(args)
    print(kwargs)

test(1, *a, **b)
# Output: (11, 22, 33)
#         {'name': '飞跃', 'qq': '142', 'address': '河南', 'tel': '011'}

5. Pass-by-reference
id(s)  # view the identity (memory address) of variable s

6. Mutable and immutable types
Mutable: lists, dictionaries, sets
Immutable: numbers, strings, tuples
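The distinction can be observed with id() (sample values arbitrary):
lst = [1, 2]
print(id(lst))
lst.append(3)   # mutated in place: same object
print(id(lst))  # unchanged

n = 10
print(id(n))
n += 1          # rebinding: a new int object
print(id(n))    # different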

7. Recursion
def jiecheng(num):  # factorial
    if num <= 1:
        return 1
    return num * jiecheng(num - 1)
res = jiecheng(2)
print(res)  # 2

8. Anonymous functions
def test(a, b, func):
    return func(a, b)

res = test(1, 2, lambda x, y: x + y)
print(res)  # 3

9. Swapping two variables
a = 4
b = 5
a, b = b, a

Or with arithmetic:
a = a + b
b = a - b
a = a - b
Or with a third temporary variable.

10. File operations

1. Opening a file
f = open('**.txt', 'w')
2. Closing a file
f.close()

Open modes:
w:   write; the file is created if it does not exist
r:   read
a:   append
rb:  binary, read-only
wb:  binary, write
ab:  binary, append
r+:  read/write; the file pointer starts at the beginning
w+:  read/write; truncates an existing file, creates it otherwise
a+:  read/write; pointer starts at the end of an existing file, creates it otherwise
rb+: binary read/write; pointer at the beginning
wb+: binary read/write; truncates an existing file, creates it otherwise
ab+: binary append; pointer at the end, file created if missing

3. Reading
f.read()

4. Writing
f.write(content)

5. Copying a file
file_name = input("请输入要复制的文件名")
f = open(file_name, 'r')
position = file_name.rfind('.')
new_file_name = file_name[0:position] + '(附件)' + file_name[position:]
new_f = open(new_file_name, 'w')
while True:
    file_con = f.read(1024)  # copy in 1024-byte chunks
    if len(file_con) == 0:
        break
    new_f.write(file_con)
f.close()
new_f.close()

6. Handling large files
readlines()  # reads all lines into a list; for very large files, prefer reading in chunks (as in the copy example) or iterating over the file object line by line

7. Positioned reads and writes
f.seek(offset, whence)  # move the file pointer (whence: 0 start, 1 current, 2 end)
f.tell()                # current position of the file pointer
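A small positioning sketch (assumes a file demo.txt exists; opened in binary mode so seeking relative to the end is permitted):
f = open('demo.txt', 'rb')
print(f.tell())  # 0: at the beginning
f.seek(0, 2)     # jump to the end of the file
print(f.tell())  # file size in bytes
f.seek(0, 0)     # back to the start
f.close()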

8. Folder/file operations
import os
os.rename(old_name, new_name)  # rename a file
os.remove(file_name)           # delete a file
os.mkdir()    # create a directory
os.rmdir()    # remove a directory
os.getcwd()   # return the current working directory
os.chdir()    # change the current directory
os.listdir()  # list the entries of the current directory

9. Batch renaming (sketched below)
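A minimal batch-renaming sketch using the os calls above, assuming a folder named 'files' and an arbitrary prefix:
import os

folder = 'files'  # hypothetical target folder
for name in os.listdir(folder):
    old_path = os.path.join(folder, name)
    new_path = os.path.join(folder, 'backup-' + name)  # arbitrary prefix
    os.rename(old_path, new_path)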

11. Object-oriented programming 1

1. A class has three parts:
its name;
its attributes;
its methods.
2. Defining a class
class ClassName:
    attributes and methods (every method takes self as its first parameter)
Example:
class Cat:
    def eat(self):
        print("猫咪在吃鱼")

    def drink(self):
        print("猫咪在喝水")

3. Creating an object
tom = Cat()
tom.eat()

4. Adding attributes to an object
tom.name = '汤姆'
tom.age = 25

5. Example:
class Cat:
    def eat(self):
        print("猫咪在吃鱼")

    def drink(self):
        print("猫咪在喝水")

    def introduce(self):
        print("%s的年龄是:%d" % (self.name, self.age))

# create an object and call its methods
tom = Cat()
tom.eat()
# set attributes
tom.name = '汤姆'
tom.age = 25
tom.introduce()

# create another object
lanmao = Cat()
lanmao.name = '蓝猫'
lanmao.age = 15
lanmao.introduce()

6. The __init__ method
Purpose: initialize the object.
Example:
class Cat:
    def __init__(self, new_name, new_age):
        self.name = new_name
        self.age = new_age

    def eat(self):
        print("猫咪在吃鱼")

    def drink(self):
        print("猫咪在喝水")

    def introduce(self):
        print("%s的年龄是:%d" % (self.name, self.age))

# create an object and call its methods
tom = Cat('汤姆', 25)  # the object reference is passed to self
tom.introduce()

# create another object
lanmao = Cat('蓝猫', 15)
lanmao.introduce()

7. The __str__ method
Purpose: return the object's description string.
Example:
class Cat:

    def __init__(self, new_name, new_age):
        self.name = new_name
        self.age = new_age

    def __str__(self):
        return "%s的年龄是%d" % (self.name, self.age)

    def eat(self):
        print("猫咪在吃鱼")

    def drink(self):
        print("猫咪在喝水")

    def introduce(self):
        print("%s的年龄是:%d" % (self.name, self.age))

# create an object; print() uses __str__
tom = Cat('汤姆', 25)
print(tom)

# create another object
lanmao = Cat('蓝猫', 15)
print(lanmao)

8. Case study (baked sweet potato):
class SweetPotato:

    def __init__(self):
        self.cookString = '生的'
        self.cookLevel = 0
        self.condiments = []

    def __str__(self):
        return "地瓜的状态%s(%d),添加的作料有:%s" % (self.cookString, self.cookLevel, str(self.condiments))

    def cook(self, cook_time):
        self.cookLevel += cook_time
        if self.cookLevel >= 0 and self.cookLevel < 3:
            self.cookString = "生的"
        elif self.cookLevel >= 3 and self.cookLevel < 5:
            self.cookString = "半生不熟"
        elif self.cookLevel >= 5 and self.cookLevel < 8:
            self.cookString = "好了"
        elif self.cookLevel > 8:
            self.cookString = '烤糊了'

    def addCondiments(self, element):
        self.condiments.append(element)

digua = SweetPotato()
print(digua)
digua.cook(1)
print(digua)
digua.cook(1)
digua.addCondiments('番茄酱')
print(digua)
digua.cook(1)
digua.addCondiments('大蒜')
print(digua)
digua.cook(1)
digua.addCondiments('肉末')
print(digua)

9. Case study 2 (furnishing a house):
class Home:

    def __init__(self, area, type, address):
        self.area = area
        self.type = type
        self.address = address
        self.leftArea = area
        self.content = []

    def __str__(self):
        return '房子的总面积是:%d,当前房子里有:%s,可用面积是:%d,户型是:%s,地址是:%s' % (self.area, str(self.content), self.leftArea, self.type, self.address)

    def addItem(self, item):
        self.leftArea -= item.get_area()
        self.content.append(item.get_name())

class Bed:
    def __init__(self, name, area):
        self.name = name
        self.area = area

    def __str__(self):
        return '床的品牌是:%s,面积是:%d' % (self.name, self.area)

    def get_area(self):
        return self.area

    def get_name(self):
        return self.name

house = Home(150, '三室一厅', '北京市朝阳区')
print(house)

bed1 = Bed('水晶家纺', 4)
print(bed1)

house.addItem(bed1)
print(house)

bed2 = Bed('席梦思', 3)
house.addItem(bed2)
print(house)

bed3 = Bed('婴儿床', 1)
house.addItem(bed3)
print(house)

10. Hiding attributes
Use accessor methods instead of setting attributes directly.
Example:
class Dog:
    def set_age(self, new_age):
        if new_age > 0 and new_age <= 10:
            self.age = new_age
        else:
            self.age = 0

    def get_age(self):
        return self.age

dog = Dog()
dog.set_age(25)
age = dog.get_age()
print(age)  # 0: the setter rejected the out-of-range value

11. Private methods
A method named with a leading double underscore (def __method(self)) cannot be accessed from outside the class.
Example:
class Dog:
    def set_name(self, new_name):
        self.name = new_name
        self.__age = 0  # private attribute

    def __send(self):  # private method, inaccessible outside the class
        print("发送短信"*20)

    def send_msg(self, money):
        if money > 100:
            print("="*20)
            self.__send()
        else:
            print("余额不足")
dog = Dog()
dog.send_msg(200)

12. The __del__ method
Called automatically just before the object is destroyed.

13. Counting references to an object
import sys
sys.getrefcount(var)  # returns one more than the actual count (the call itself holds a temporary reference)
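A quick look at getrefcount (exact counts can vary with interpreter details):
import sys

a = [1, 2, 3]
print(sys.getrefcount(a))  # typically 2: `a` plus the temporary argument reference
b = a
print(sys.getrefcount(a))  # one higher now that `b` also refers to the list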

14. Inheritance: class Child(Parent)
Example:
class Animal:
    def eat(self):
        print("...吃饭...")

    def drink(self):
        print('...喝水...')

    def sleep(self):
        print("...睡觉...")

    def run(self):
        print("...跑跑...")

class Dog(Animal):  # inherits from Animal
    def bark(self):
        print("...旺旺叫...")

class Xiaotian(Dog):  # inherits from Dog
    def fly(self):
        print("..飞啊飞....")

a = Animal()
wangcai = Dog()
wangcai.bark()
wangcai.run()
xiaotian = Xiaotian()
xiaotian.fly()
xiaotian.bark()
xiaotian.eat()

15. Overriding
class Xiaotian(Dog):
    def fly(self):
        print("..飞啊飞....")

    def bark(self):  # overrides the parent method
        print("...狂叫....")

16. Calling an overridden method

class Xiaotian(Dog):
    def fly(self):
        print("..飞啊飞....")

    def bark(self):  # overrides the parent method
        print("...狂叫....")
        # call the overridden parent method, option 1
        Dog.bark(self)  # self must be passed explicitly
        # call the overridden parent method, option 2
        super().bark()

17. Private attributes and methods under inheritance

Private methods are not inherited, and neither are private attributes.
However, a public method inherited from the parent can still access the parent's private attributes and private methods;
a public method defined in the subclass itself cannot access the parent's private attributes or private methods.

18. Multiple inheritance
A subclass can have several parent classes and combines their features.
In Python 3, object is the root base class of every class.
class Base:          # in Python 2 this is a classic (old-style) class; in Python 3 it implicitly inherits object
class Base(object):  # new-style class
class C(A, B):       # multiple inheritance: C has the features of both A and B; separate parents with commas

Note on multiple inheritance: ClassName.__mro__ shows the method resolution order.
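A small sketch of __mro__ on a diamond hierarchy (class names invented):
class A:
    def hello(self):
        print("A")

class B(A):
    def hello(self):
        print("B")

class C(A):
    def hello(self):
        print("C")

class D(B, C):
    pass

print(D.__mro__)  # (D, B, C, A, object)
D().hello()       # "B": the first match in the MRO wins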

19. Polymorphism
Example:
class Dog(object):

    def print_self(self):
        print("大家好,希望多多关照")

class Xiaotian(Dog):
    def print_self(self):
        print("hello everybody...")

def introduce(temp):
    temp.print_self()

dog1 = Dog()
dog2 = Xiaotian()
introduce(dog1)
introduce(dog2)

20. Class attributes vs. instance attributes
An instance attribute belongs to one particular instance; instances do not share it.
A class attribute belongs to the class object and is shared by all of its instances.
Example:
class Tool(object):
    # class attribute
    num = 0
    def __init__(self, new_name):
        # instance attribute
        self.name = new_name
        Tool.num += 1  # update the class attribute

tool1 = Tool('铁锹')
tool2 = Tool('铲子')
tool3 = Tool('水桶')
print(Tool.num)  # 3

21. Instance methods, class methods, static methods
Example:
class Game(object):
    # class attribute
    num = 0
    # instance method
    def __init__(self):
        self.name = '老王'  # instance attribute
    # class method
    @classmethod
    def add_num(cls):
        cls.num = 100

    # static method
    @staticmethod
    def print_menu():
        print("------------------")
        print("穿越火线")
        print("开始游戏")
        print("结束游戏")
        print("------------------")


game = Game()

# calling the class method
Game.add_num()  # via the class name
game.add_num()  # or via an instance of the class
print(Game.num)

# calling the static method
Game.print_menu()  # 1. via the class name
game.print_menu()  # 2. via an instance

22. Calling parent-class methods:
ParentClass.method(self)
super().method()
super(CurrentClass, self).method()

12. Object-oriented programming 2

1. Designing a class (a 4S car dealership): simple factory pattern
2. Factory method pattern
3. The __new__ method
Example:
class Dog(object):

    def __init__(self):
        print("----------init----------")

    def __del__(self):
        print("----------del----------")

    def __str__(self):
        return "----------str----------"

    def __new__(cls):  # cls is the class object that Dog refers to
        # print(id(cls))
        print("----------new----------")
        return object.__new__(cls)

# print(id(Dog))
xiaotian = Dog()

4. Singleton objects
Example:
class Dog(object):

    __instance = None

    def __new__(cls):
        if cls.__instance is None:
            cls.__instance = object.__new__(cls)
        return cls.__instance
a = Dog()
print(id(a))
b = Dog()
print(id(b))  # same id: both names refer to the single instance

5. Initializing only once
Example:
class Dog(object):

    __instance = None
    __init_flag = False

    def __new__(cls, name):
        if cls.__instance is None:
            cls.__instance = object.__new__(cls)
        return cls.__instance

    def __init__(self, name):
        if Dog.__init_flag == False:
            self.name = name
            Dog.__init_flag = True

a = Dog('旺财')
print(id(a))
print(a.name)

b = Dog('哮天犬')
print(id(b))
print(b.name)  # still '旺财': __init__ only ran once

6. Exceptions and their handling
Catching an exception:
try:  # code that may raise
    print(a)
    print("---------")
except NameError:  # the exception name
    print("如果捕获到异常后做的处理....")

print("--------------")


Handling several kinds:
try:  # code that may raise
    11/0
    open('xxx.txt', 'r')
    print(a)
    print("---------")
except (NameError, FileNotFoundError):
    print("如果捕获到异常后做的处理....")
except Exception as ret:  # catches everything else; may be given an alias
    print("如果用了Exception,只要以上没有捕获到异常,此except一定会捕获到")
    print(ret)
else:
    print("没有异常就会执行")
finally:
    print("不管有无异常最终都会执行.....")

print("--------------")

Raising a custom exception:
class ShortInputException(Exception):
    """A user-defined exception class"""
    def __init__(self, length, atleast):
        self.length = length
        self.atleast = atleast


def main():
    try:
        s = input("请输入--->")
        if len(s) < 3:
            raise ShortInputException(len(s), 3)
    except ShortInputException as result:
        print("ShortInputException:输入的长度是:%d,长度至少是:%d" % (result.length, result.atleast))
    else:
        print("没有异常发生")

main()

Re-raising inside a handler:
class Test(object):
    def __init__(self, switch):
        self.switch = switch  # the switch

    def calculate(self, a, b):
        try:
            return a / b
        except Exception as result:
            if self.switch:
                print("捕获开始,应捕获到异常,信息如下:")
                print(result)
            else:
                raise

a = Test(True)
a.calculate(11, 0)

print("----------------分割线---------------------")

a.switch = False
a.calculate(11, 0)

7. Truthiness in if statements
Truthy: 1, -1, "a", ...
Falsy: "", None, 0, [], {} are all equivalent to False

8. Importing and using your own modules
1.
# import new_mode  # no file suffix needed
#
# new_mode.test()
# or
2.
from new_mode import test
test()
# or
3.
# from new_mode import test, test1
from new_mode import *  # use sparingly: on name clashes, later imports shadow earlier ones
4. import time as tt  # give a module an alias

13. Miscellaneous

1. __all__
In a module, __all__ = ['name1', 'name2', ...]
limits what `from module import *` exposes, keeping unneeded names out.
2. __init__.py
Put several module files into a folder and create an __init__.py inside it:
the folder then becomes a package.

Contents of __init__.py:
__all__ = ['name1', 'name2', ...]
or
standalone helper functions (the need for __init__.py itself applies to Python 2)
or
from . import module  # import relative to the current package

3. Publishing and installing a module
1. Create setup.py next to the package, containing:
from distutils.core import setup
setup(name="name", version="1.0", description="description", author="assasin", py_modules=["package.module", "package.module"])
2. python setup.py build
3. python setup.py sdist
4. Install: python setup.py install

4. Passing arguments to a program
python xxx.py arg1 arg2 arg3 ...  (separated by spaces)

xxx.py:
import sys
sys.argv  # receives the arguments as a list
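A tiny sketch (saved as a hypothetical args_demo.py and run with arguments):
# args_demo.py
import sys

print(sys.argv)  # ['args_demo.py', 'arg1', ...] -- argv[0] is the script name
for arg in sys.argv[1:]:
    print("got:", arg)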

5. List comprehensions
range(1, 5) yields 1 through 4; in Python 2 it returns a list.
range's pitfall: in Python 2 it builds the whole list in memory (Python 3's range is lazy).
To build the list 1..16:
Example:
a = [i for i in range(1, 17)]

c = [i for i in range(10) if i % 2 == 0]
# [0, 2, 4, 6, 8]
d = [i for i in range(3) for j in range(2)]
# [0, 0, 1, 1, 2, 2]
d = [(i, j) for i in range(3) for j in range(2)]
# [(0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1)]

6. Sets, tuples, lists
Defining a set: c = {11, 22, 33}
Set elements cannot repeat!
a = []
set(a)  # convert list a to a set
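Deduplicating a list via set() (sample data arbitrary):
a = [11, 22, 22, 33, 33, 33]
s = set(a)
print(s)        # {33, 11, 22} -- duplicates removed, order not guaranteed
print(list(s))  # back to a list if needed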

14. Plane-war game

import pygame
import time
from pygame.locals import *
import random

class Base(object):
    def __init__(self, screen_temp, x, y, image_name):
        self.x = x
        self.y = y
        self.screen = screen_temp
        self.image = pygame.image.load(image_name)


class BasePlane(Base):

    def __init__(self, screen_temp, x, y, image_name):
        Base.__init__(self, screen_temp, x, y, image_name)
        self.bullet_list = []  # holds references to the bullets that have been fired

    def display(self):
        self.screen.blit(self.image, (self.x, self.y))
        for bullet in self.bullet_list:
            bullet.display()
            bullet.move()
            if bullet.judge():  # has the bullet left the screen?
                self.bullet_list.remove(bullet)


class HeroPlane(BasePlane):

    def __init__(self, screen_temp):
        BasePlane.__init__(self, screen_temp, 210, 720, "./images/hero1.png")

    def move_left(self):
        self.x -= 5

    def move_right(self):
        self.x += 5

    def fire(self):
        self.bullet_list.append(Bullet(self.screen, self.x, self.y))

class EnemyPlane(BasePlane):
    """Enemy plane"""
    def __init__(self, screen_temp):
        BasePlane.__init__(self, screen_temp, 0, 0, "./images/enemy0.png")
        self.direction = 'right'

    def move(self):
        if self.direction == 'right':
            self.x += 5
        elif self.direction == 'left':
            self.x -= 5

        if self.x > 480 - 50:  # screen edge minus plane width
            self.direction = 'left'
        elif self.x < 0:
            self.direction = 'right'

    def fire(self):
        random_num = random.randint(1, 100)
        if random_num == 8 or random_num == 20:
            self.bullet_list.append(EnemyBullet(self.screen, self.x, self.y))


class BaseBullet(Base):
    def __init__(self, screen_temp, x, y, image_name):
        Base.__init__(self, screen_temp, x, y, image_name)

    def display(self):
        self.screen.blit(self.image, (self.x, self.y))

class Bullet(BaseBullet):

    def __init__(self, screen_temp, x, y):
        BaseBullet.__init__(self, screen_temp, x + 40, y - 20, "./images/bullet.png")

    def move(self):
        self.y -= 5

    def judge(self):
        if self.y < 0:
            return True
        else:
            return False

# bullets fired by the enemy plane
class EnemyBullet(BaseBullet):

    def __init__(self, screen_temp, x, y):
        BaseBullet.__init__(self, screen_temp, x + 25, y + 40, "./images/bullet1.png")

    def move(self):
        self.y += 5

    def judge(self):
        if self.y > 852:
            return True
        else:
            return False


def key_control(hero_temp):
    # fetch keyboard/window events
    for event in pygame.event.get():
        # was the close button clicked?
        if event.type == QUIT:
            print("exit")
            exit()

        # was a key pressed?
        if event.type == KEYDOWN:
            # a or the left arrow
            if event.key == K_a or event.key == K_LEFT:
                print("left")
                hero_temp.move_left()
            # d or the right arrow
            elif event.key == K_d or event.key == K_RIGHT:
                print("right")
                hero_temp.move_right()
            # the space bar
            elif event.key == K_SPACE:
                print("space")
                hero_temp.fire()

def main():
    # 1. create the window that displays everything
    screen = pygame.display.set_mode((480, 852), 0, 32)
    # 2. load a background image the same size as the window
    background = pygame.image.load("./images/background.png")
    # create the player's plane
    hero = HeroPlane(screen)
    # create an enemy plane
    enemy = EnemyPlane(screen)

    # 3. blit the background into the window
    while True:
        # draw the background
        screen.blit(background, (0, 0))
        hero.display()   # draw the player's plane
        enemy.display()  # draw the enemy plane
        enemy.move()     # move the enemy plane
        enemy.fire()     # enemy fires
        # flush everything that needs to be shown to the display
        pygame.display.update()

        # keyboard events
        key_control(hero)
        # short delay to keep CPU usage down
        time.sleep(0.01)


if __name__ == '__main__':
    main()

15. Lao Wang fires a gun

class Person(object):
    def __init__(self, name):
        super(Person, self).__init__()
        self.name = name
        self.gun = None  # reference to the gun object
        self.hp = 100

    # load a bullet into the magazine
    def load(self, danjia_temp, bullet_temp):
        danjia_temp.save(bullet_temp)

    # fit the magazine into the gun
    def anzhuang(self, gun_temp, danjia_temp):
        gun_temp.save(danjia_temp)

    # pick up the gun
    def handle(self, gun_temp):
        self.gun = gun_temp

    # fire the gun at the enemy
    def fire(self, diren_temp):
        self.gun.kaipao(diren_temp)

    def diaoxue(self, shashang):
        self.hp -= shashang

    def __str__(self):
        if self.gun:
            return "%s的血量是:%d,有枪,%s" % (self.name, self.hp, self.gun)
        else:
            if self.hp > 0:
                return "%s的血量是:%d,没有枪" % (self.name, self.hp)
            else:
                return "%s已经挂了" % (self.name)

class Gun(object):
    def __init__(self, name):
        super(Gun, self).__init__()
        self.name = name    # gun model
        self.danjia = None  # reference to the magazine object

    # store a reference to the magazine
    def save(self, danjia_temp):
        self.danjia = danjia_temp

    def kaipao(self, diren_temp):
        bullet = self.danjia.tan_zidan()
        if bullet:
            # the bullet hits the enemy
            bullet.kill(diren_temp)
        else:
            print("弹夹中没有子弹了...")


    def __str__(self):
        if self.danjia:
            return "枪的信息:%s,%s" % (self.name, self.danjia)
        else:
            return "枪的信息:%s,枪中无子弹" % (self.name)

class Danjia(object):  # magazine

    def __init__(self, max_num):
        super(Danjia, self).__init__()
        self.max_num = max_num  # magazine capacity
        self.bullet_list = []   # references to all bullets inside

    # store a bullet in the magazine
    def save(self, bullet_temp):
        self.bullet_list.append(bullet_temp)

    # eject a bullet
    def tan_zidan(self):
        if self.bullet_list:
            return self.bullet_list.pop()
        else:
            return None

    def __str__(self):
        return "弹夹的信息:%d/%d" % (len(self.bullet_list), self.max_num)

class Bullet(object):
    def __init__(self, shashang):
        super(Bullet, self).__init__()
        self.shashang = shashang  # the bullet's damage

    def kill(self, diren_temp):
        # the enemy loses hp equal to one bullet's damage
        diren_temp.diaoxue(self.shashang)


def main():
    '''drive the whole program'''
    # 1. Lao Wang
    laowang = Person('老王')
    # 2. a gun
    ak = Gun('Ak47')
    # 3. a magazine
    danjia = Danjia(20)
    # 4. some bullets
    for i in range(15):
        bullet = Bullet(10)
        # 5. Lao Wang: bullet --> magazine
        laowang.load(danjia, bullet)
    # inspect the magazine
    # print(danjia)
    # inspect the gun
    # print(ak)
    # 6. Lao Wang: magazine --> gun
    laowang.anzhuang(ak, danjia)
    # 7. Lao Wang picks up the gun
    laowang.handle(ak)
    # inspect Lao Wang
    print(laowang)
    # 8. the enemy
    diren = Person('敌人')
    print(diren)
    # 9. Lao Wang fires at the enemy, eight times
    for i in range(8):
        laowang.fire(diren)
        print(diren)
        print(laowang)


if __name__ == '__main__':
    main()

Setting up a Node.js environment

[TOC]

A new project needs Node.js, so here is a record of the Node.js and npm installation process.

1. Installing Node.js locally on a Mac

Opening an old node.js project in phpstorm reported syntax errors. The phpstorm nodejs plugin was enabled, so the conclusion was that the node.js environment was not configured.

Installing and configuring node.js and npm on a Mac:
  1. Visit the node.js download page (https://nodejs.org/en/download/)

    Download the recommended node-v8.11.3.pkg package and follow the installer prompts. When it finishes, it reports the installed versions and locations, as shown below.

    • Node.js v8.11.3 to /usr/local/bin/node
    • npm v5.6.0 to /usr/local/bin/npm
    Make sure that /usr/local/bin is in your $PATH.

    The installed versions can be checked from the terminal:

    $ node -v
    v8.11.3
    $ npm -v
    5.6.0
  2. Switching to the Taobao npm mirror (http://npm.taobao.org)

    # check the current registry
    $ npm get registry
    https://registry.npmjs.org/
    # install cnpm pointed at the Taobao mirror -- this leaves the existing npm untouched
    sudo npm install -g cnpm --registry=https://registry.npm.taobao.org

    # once installed, check the cnpm version info
    $ cnpm -v
    cnpm@6.0.0 (/usr/local/lib/node_modules/cnpm/lib/parse_argv.js)
    npm@6.2.0 (/usr/local/lib/node_modules/cnpm/node_modules/npm/lib/npm.js)
    node@8.11.3 (/usr/local/bin/node)
    npminstall@3.10.0 (/usr/local/lib/node_modules/cnpm/node_modules/npminstall/lib/index.js)
    prefix=/usr/local
    darwin x64 17.4.0
    registry=https://registry.npm.taobao.org
  3. Installing the express framework and its related modules

    # install express and save it to the dependency list
    cnpm install express --save
  4. Listing installed packages

    # global
    npm list -g --depth 0
    # local
    npm list --depth 0
  5. Uninstalling a local package

    npm uninstall <package-name>
  6. Creating package.json

    Running npm init creates a package.json file in the current directory.

    npm init asks a series of questions; each can be answered or left at its default. Answering yes at the end writes the package.json file with the collected values.

    To skip the questions entirely, run npm init --yes, which generates a package.json with all defaults.

2. Upgrading nodejs with nvm on centos6.5

  1. Check the server's nodejs version

    # node -v
    v6.10.3
    # npm -v
    3.10.10
  2. The existing version was installed and managed via nvm

    Install and switch to a specific nodejs version through nvm:

    # list the available node versions
    # nvm ls-remote
    v8.11.1
    v8.11.2
    -> v8.11.3
    v9.0.0

    # install v8.11.3
    nvm install v8.11.3
    # switch to the new version once installed
    nvm use v8.11.3
    # uninstall the old version (only possible while that version is not in use)
    nvm uninstall v6.10.3
    # make the new version the default
    nvm alias default v8.11.3

3. Installing nodejs with nvm on centos7

  1. Install nvm; the latest version is 0.33.11 (see the git repo)

  2. # curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash
    or with wget:
    # wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash

  3. Reload the shell profile
    source ~/.bash_profile

  4. Check the nvm version
    nvm --version

  5. List the node versions available to install

    Note the `(Latest LTS: Carbon)` marker: it flags the newest long-term-support release.
    nvm list-remote

  6. Install a specific node version
    nvm install v8.12.0

  7. Confirm the installation
    node -v
    npm -v

  8. Switch versions
    nvm use v10.10.0

  9. Set the default version
    nvm alias default v10.10.0

  10. List all installed versions
    nvm list

4. pm2 (process manager)

pm2 is a process manager for Node applications with built-in load balancing.

[pm2 docs](https://pm2.io/doc/en/runtime/quick-start/)

Install pm2 globally via npm:
npm install pm2 -g

Commonly used pm2 commands:

$ pm2 start app.js # start the app.js application

$ pm2 start app.js -i 4 # start 4 instances of app.js in cluster mode # the 4 instances are load-balanced automatically

$ pm2 start app.js --name="api" # start the application and name it "api"

$ pm2 start app.js --watch # restart the app automatically when files change

$ pm2 start script.sh # start a bash script

$ pm2 list # list all applications started by pm2

$ pm2 monit # show CPU and memory usage per application

$ pm2 show [app-name] # show all information about an application

$ pm2 logs # show logs for all applications

$ pm2 logs [app-name] # show logs for one application

$ pm2 flush # flush the logs

$ pm2 stop all # stop all applications

$ pm2 stop 0 # stop the application with id 0

$ pm2 restart all # restart all applications

$ pm2 reload all # reload all apps running in cluster mode

$ pm2 gracefulReload all # gracefully reload all apps in cluster mode

$ pm2 delete all # stop and delete all applications

$ pm2 delete 0 # delete the application with id 0

$ pm2 scale api 10 # scale the app named api to 10 instances

$ pm2 reset [app-name] # reset the restart counter

$ pm2 startup # generate a startup-on-boot command

$ pm2 save # save the current application list

$ pm2 resurrect # reload the saved application list

$ pm2 update # save processes, kill pm2 and restore processes

$ pm2 generate # generate a sample json configuration file

$ pm2 deploy app.json prod setup # set up the "prod" remote server

$ pm2 deploy app.json prod # update the "prod" remote server

$ pm2 deploy app.json prod revert 2 # revert the "prod" remote server by 2 deployments




5. pm2 in cluster mode conflicts with log4js, so log4js stops writing logs (log4js pm2 cluster configuration)

Fix:

Install pm2-intercom:

pm2 install pm2-intercom

Then edit the log4js.json configuration and add `pm2: true`:

"categories": {
    "default": {"appenders": ["console", "log_info"], "level": "info"},
    "logInfo": {"appenders": ["console", "log_info"], "level": "info"},
    "logErr": {"appenders": ["console", "log_error"], "level": "error"}
},
"pm2": true


References:

https://github.com/xiaozhongliu/node-api-seed/blob/master/util/logger.js

[pm2 logging problem with multiple applications](https://github.com/log4js-node/log4js-node/issues/547)

[集群模式PM2+Log4js log写入失败问题](https://juejin.im/entry/5a0cf3276fb9a0450167814f)

Redis-2019-study

1. Data types and master/slave configuration

# 1. string
# A string value can hold up to 512MB. Strings are binary-safe and can store any data.
# Set a key with a 10-second expiry: SETEX name 10 assasin

# 2. hash -- stores objects as field-value pairs
# Set one field: HSET key field value          e.g. hset info name assasin
# Set several fields: HMSET key field value [field value ...]
# Get one field: HGET info name
# Get all fields and values: HGETALL key
# Get all fields: HKEYS key
# Get all values: HVALS key
# Number of fields: HLEN key
# Does a field exist: HEXISTS key field
# Delete a field and its value: HDEL key field

# 3. list -- elements are strings, kept in insertion order; elements are added at the head or tail
# Push at the head: LPUSH key value
# Push at the tail: RPUSH key value
# Insert before|after a given element: LINSERT key BEFORE|AFTER pivot value
# Set the element at a given index: LSET key index value
# Indexes are zero-based
# An index may be negative, counting back from the tail: -1 is the last element
# Pop and return the first element of the list at key: LPOP key
# Pop and return the last element of the list at key: RPOP key
# Return the elements in a range of the list at key; start and stop are zero-based offsets and may be negative, counting from the tail (-1 is the last element): LRANGE key start stop

# 4. set -- unordered collection of unique string elements, no duplicates
# Add elements: SADD key member [member ...]
# All elements of the set at key: SMEMBERS key
# Number of elements: SCARD key
# Intersection of several sets: SINTER key [key ...]
# Difference between a set and others: SDIFF key [key ...]
# Union of several sets: SUNION key [key ...]
# Is a value a member of the set: SISMEMBER key member

# 5. zset
# Characteristics:
# sorted set: an ordered collection
# elements are strings
# elements are unique, no duplicates
# each element carries a double score (its weight); elements are ordered by score from small to large, and scores may repeat
# Add: ZADD key score member [score member ...]
# Elements in a rank range: ZRANGE key start stop
# Number of elements: ZCARD key
# Members with a score between min and max: ZCOUNT key min max
# Score of member in the sorted set at key: ZSCORE key member


# Publish/subscribe
# Characteristics:
# Publishers do not address messages to specific receivers (subscribers); messages are published to channels, with no knowledge of who is subscribed
# Subscribers follow one or more channels and receive only the messages they care about, with no knowledge of who published them
# Decoupling publishers from subscribers allows greater scalability and a more dynamic network topology
# A message sent to a channel is pushed to every client subscribed to that channel
# Clients do not poll for messages; subscribing to a channel is enough for its content to be pushed to them

# Message format:
# A pushed message has three parts:
# part 1: the message type, one of three
#   subscribe: the subscription succeeded
#   unsubscribe: the unsubscription succeeded
#   message: another client published a message
# If part 1 is subscribe, part 2 is the channel and part 3 the current number of subscribed channels
# If part 1 is unsubscribe, part 2 is the channel and part 3 the current number of subscribed channels; if that is 0, the client has left the Pub/Sub state and may issue any redis command
# If part 1 is message, part 2 is the source channel and part 3 the message body

# Subscribe: SUBSCRIBE channel [channel ...]
# Unsubscribe (with no argument, cancels all subscriptions): UNSUBSCRIBE channel [channel ...]
# Publish: PUBLISH channel message


# Master/slave configuration
# A master can have several slaves, each slave can in turn have slaves of its own, and so on, forming a powerful multi-level server cluster. Example: use the machine at 192.168.1.10 as the master and the one at 192.168.1.11 as the slave.
# 1. Master configuration
bind 192.168.1.10
# 2. Slave configuration. Note: slaveof takes the master's ip followed by the port, and the port must be given:
bind 192.168.1.11
slaveof 192.168.1.10 6379
# Note: restart both the master and the slave services
# 3. Run info on both master and slave to inspect their state, then write on the master:
set hello world
# 4. Read on the slave:
get hello
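The three-part message format above can be observed from Python; a minimal sketch, assuming the third-party redis-py client (pip install redis), a local server, and a made-up channel name:
import redis

r = redis.Redis(host='127.0.0.1', port=6379)
p = r.pubsub()
p.subscribe('news')         # the first message received has type 'subscribe'
r.publish('news', 'hello')  # returns the number of subscribers reached
for msg in p.listen():
    print(msg['type'], msg.get('channel'), msg.get('data'))
    if msg['type'] == 'message':
        break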

2. redis.conf

# Redis configuration file example.
#
# Note that in order to read the configuration file, Redis must be
# started with the file path as first argument:
#
# ./redis-server /path/to/redis.conf

# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.

################################## INCLUDES ###################################

# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include /path/to/local.conf
# include /path/to/other.conf

################################ GENERAL #####################################

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes # run as a background daemon (default: no)

# When running daemonized, Redis writes a pid file in /var/run/redis.pid by
# default. You can specify a custom pid file location here.
pidfile /var/run/redis/redis-server.pid # pid file location


# Accept connections on the specified port, default is 6379.
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379

# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
# Sets the TCP backlog, which is effectively a connection queue: its total covers both the not-yet-completed and the completed three-way-handshake queues. Under high concurrency a high backlog is needed to avoid slow-client connection problems. The Linux kernel silently truncates it to the value of /proc/sys/net/core/somaxconn, so raise both somaxconn and tcp_max_syn_backlog to get the desired effect.
tcp-backlog 511

# By default Redis listens for connections from all the network interfaces
# available on the server. It is possible to listen to just one or multiple
# interfaces using the "bind" configuration directive, followed by one or
# more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
#bind 127.0.0.1

# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /var/run/redis/redis.sock
# unixsocketperm 700

# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0

# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
# equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 60 seconds.
# Unit: seconds. 0 disables keepalive probing; 60 is the recommended value.
tcp-keepalive 0

# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice # log level

# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile /var/log/redis/redis-server.log # log file location

# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no # whether to also send logs to syslog

# Specify the syslog identity.
# syslog-ident redis # identity tag used in syslog entries

# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0 # syslog facility: USER or LOCAL0-LOCAL7

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16

################################ SNAPSHOTTING ################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving completely by commenting out all "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#
# save "" # disables snapshotting entirely

save 900 1 # snapshot after 900 s if at least 1 key changed
save 300 10 # snapshot after 300 s if at least 10 keys changed
save 60 10000 # snapshot after 60 s if at least 10000 keys changed

# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes # default yes; set to no only if you can tolerate inconsistency or detect and handle it by other means

# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes # compress snapshots written to disk with the LZF algorithm; set to no to save the CPU cost, at the price of larger files

# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes # append a CRC64 checksum to snapshots for integrity checking; this costs roughly 10% performance, so disable it for maximum speed

# The filename where to dump the DB
dbfilename dump.rdb # snapshot file name

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir /var/lib/redis

################################# REPLICATION #################################

# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
# 1) Redis replication is asynchronous, but you can configure a master to
# stop accepting writes if it appears to be not connected with at least
# a given number of slaves.
# 2) Redis slaves are able to perform a partial resynchronization with the
# master if the replication link is lost for a relatively small amount of
# time. You may want to configure the replication backlog size (see the next
# sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
# network partition slaves automatically try to reconnect to masters
# and resynchronize with them.
#
# slaveof <masterip> <masterport>

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>

# When a slave loses its connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
# still reply to client requests, possibly with out of date data, or the
# data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
# an error "SYNC with master in progress" to all the kind of commands
# but to INFO and SLAVEOF.
#
slave-serve-stale-data yes

# You can configure a slave instance to accept writes or not. Writing against
# a slave instance may be useful to store some ephemeral data (because data
# written on a slave will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default slaves are read-only.
#
# Note: read only slaves are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only slave exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only slaves using 'rename-command' to shadow all the
# administrative / dangerous commands.
slave-read-only yes

# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New slaves and reconnecting slaves that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the slaves.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
# file on disk. Later the file is transferred by the parent
# process to the slaves incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
# RDB file to slave sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more slaves
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new slaves arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple slaves
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no

# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the slaves.
#
# This is important since once the transfer starts, it is not possible to serve
# new slaves arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more slaves arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5

# Slaves send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_slave_period option. The default value is 10
# seconds.
#
# repl-ping-slave-period 10

# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of slave.
# 2) Master timeout from the point of view of slaves (data, pings).
# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
# every time there is low traffic between the master and the slave.
#
# repl-timeout 60

# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to slaves. But this can add a delay for
# the data to appear on the slave side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the slave side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and slaves are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no

# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data when slaves are disconnected for some time, so that when a slave
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the slave missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the slave can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a slave connected.
#
# repl-backlog-size 1mb

# After a master has no longer connected slaves for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last slave disconnected, for
# the backlog buffer to be freed.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600

# The slave priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a slave to promote into a
# master if the master is no longer working correctly.
#
# A slave with a low priority number is considered better for promotion, so
# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the slave as not able to perform the
# role of master, so a slave with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
slave-priority 100

# It is possible for a master to stop accepting writes if there are less than
# N slaves connected, having a lag less or equal than M seconds.
#
# The N slaves need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the slave, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough slaves
# are available, to the specified number of seconds.
#
# For example to require at least 3 slaves with a lag <= 10 seconds use:
#
# min-slaves-to-write 3
# min-slaves-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.

################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# requirepass foobared
requirepass 19920308shibin

# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to slaves may cause problems.

################################### LIMITS ####################################

# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 10000 # maximum number of simultaneous client connections

# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU cache, or to set
# a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of slaves is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have slaves attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for slave
# output buffers (but this is not needed if the policy is 'noeviction').
#
# maxmemory <bytes> # memory usage limit

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# Eviction policies -- important!
# volatile-lru -> remove the key with an expire set using an LRU algorithm
#   (least recently used; only keys with an expiry set)
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key with an expire set
#   (a random key from the expiring set; only keys with an expiry set)
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
#   (the keys whose TTL is smallest, i.e. those about to expire soonest)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with any of the above policies, Redis will return an error on write
# operations, when there are no suitable keys for eviction.
#
# At the date of writing these commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy noeviction # default: never evict

# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs a bit more CPU. 3 is very fast but not very accurate.
#
# maxmemory-samples 5 # sample size: the LRU and TTL algorithms are approximations, not exact; redis inspects this many keys and evicts the best candidate among them

############################## APPEND ONLY MODE ###############################
# AOF persistence
# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.

appendonly no # AOF persistence is off by default

# The name of the append only file (default: "appendonly.aof")

appendfilename "appendonly.aof"

# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
# AOF fsync policies
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

# appendfsync always # synchronous: every change is flushed to disk immediately; poor performance, best data integrity
appendfsync everysec # the default: asynchronous, fsync once per second; a crash within that second loses that second's writes
# appendfsync no # leave flushing entirely to the OS
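For reference, AOF can likewise be switched on without a restart (standard commands; persisting the change still requires editing redis.conf):

redis-cli config set appendonly yes
redis-cli config set appendfsync everysec
redis-cli bgrewriteaof   # force a compaction once the file grows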

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.

no-appendfsync-on-rewrite no # whether fsync may be suppressed while a rewrite runs; keep the default "no" for data safety

# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.

auto-aof-rewrite-percentage 100 # rewrite once the AOF has grown 100% past its size after the last rewrite
auto-aof-rewrite-min-size 64mb # ...and only once the file is at least this large

# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
aof-load-truncated yes

################################ LUA SCRIPTING ###############################

# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000

################################ REDIS CLUSTER ###############################
#
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
# in order to mark it as "mature" we need to wait for a non trivial percentage
# of users to deploy it in production.
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
#
# cluster-enabled yes

# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf

# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
#
# cluster-node-timeout 15000

# A slave of a failing master will avoid to start a failover if its data
# looks too old.
#
# There is no simple way for a slave to actually have a exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple slaves able to failover, they exchange messages
# in order to try to give an advantage to the slave with the best
# replication offset (more data from the master processed).
# Slaves will try to get their rank by offset, and apply to the start
# of the failover a delay proportional to their rank.
#
# 2) Every single slave computes the time of the last interaction with
# its master. This can be the last ping or command received (if the master
# is still in the "connected" state), or the time that elapsed since the
# disconnection with the master (if the replication link is currently down).
# If the last interaction is too old, the slave will not try to failover
# at all.
#
# The point "2" can be tuned by user. Specifically a slave will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
# (node-timeout * slave-validity-factor) + repl-ping-slave-period
#
# So for example if node-timeout is 30 seconds, and the slave-validity-factor
# is 10, and assuming a default repl-ping-slave-period of 10 seconds, the
# slave will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large slave-validity-factor may allow slaves with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a slave at all.
#
# For maximum availability, it is possible to set the slave-validity-factor
# to a value of 0, which means, that slaves will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-slave-validity-factor 10

# Cluster slaves are able to migrate to orphaned masters, that are masters
# that are left without working slaves. This improves the cluster ability
# to resist to failures as otherwise an orphaned master can't be failed over
# in case of failure if it has no working slaves.
#
# Slaves migrate to orphaned masters only if there are still at least a
# given number of other working slaves for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a slave
# will migrate only if there is at least 1 other working slave for its master
# and so forth. It usually reflects the number of slaves you want for every
# master in your cluster.
#
# Default is 1 (slaves migrate only if their masters remain with at least
# one slave). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1

# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least an hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# are no longer covered) all the cluster becomes, eventually, unavailable.
# It automatically returns available as soon as all the slots are covered again.
#
# However sometimes you want the subset of the cluster which is working,
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes

# In order to setup your cluster make sure to read the documentation
# available at http://redis.io web site.

################################## SLOW LOG ###################################

# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.

# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000

# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128
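A short sketch of reading the slow log once these two directives are active (standard SLOWLOG subcommands):

redis-cli slowlog get 10   # the 10 most recent slow commands
redis-cli slowlog len      # how many entries are currently held
redis-cli slowlog reset    # clear the log and reclaim its memory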

################################ LATENCY MONITOR ##############################

# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0

############################# EVENT NOTIFICATION ##############################

# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
# K Keyspace events, published with __keyspace@<db>__ prefix.
# E Keyevent events, published with __keyevent@<db>__ prefix.
# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
# $ String commands
# l List commands
# s Set commands
# h Hash commands
# z Sorted set commands
# x Expired events (events generated every time a key expires)
# e Evicted events (events generated when a key is evicted for maxmemory)
# A Alias for g$lshzxe, so that the "AKE" string means all the events.
#
# The "notify-keyspace-events" takes as argument a string that is composed
# of zero or multiple characters. The empty string means that notifications
# are disabled.
#
# Example: to enable list and generic events, from the point of view of the
# event name, use:
#
# notify-keyspace-events Elg
#
# Example 2: to get the stream of the expired keys subscribing to channel
# name __keyevent@0__:expired use:
#
# notify-keyspace-events Ex
#
# By default all notifications are disabled because most users don't need
# this feature and the feature has some overhead. Note that if you don't
# specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""
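For example, watching keys expire in database 0 takes two steps (standard commands; "Ex" enables keyevent notifications for expired events):

redis-cli config set notify-keyspace-events Ex
redis-cli psubscribe '__keyevent@0__:expired'
# in another terminal: redis-cli set foo bar ex 5   -- "foo" is published 5 seconds later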

############################### ADVANCED CONFIG ###############################

# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64

# Similarly to hashes, small lists are also encoded in a special way in order
# to save a lot of space. The special representation is only used when
# you are under the following limits:
list-max-ziplist-entries 512
list-max-ziplist-value 64

# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes

# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# slave -> slave clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
#
# Instead there is a default limit for pubsub and slave clients, since
# subscribers and slaves receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10

# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes

3. CAP + Base

# 1. Traditional ACID (Atomicity, Consistency, Isolation, Durability)
# Atomicity
# Consistency
# Isolation
# Durability

# 2. CAP (Consistency, Availability, Partition tolerance)


# 3. CAP: pick two of three
# The core of the CAP theorem: a distributed system cannot satisfy consistency, availability and partition tolerance all at once; at best it can satisfy two of the three well.
# Accordingly, NoSQL databases are grouped into systems that satisfy CA, CP or AP:
# CA - single-site clusters: consistent and available, but usually not very scalable; e.g. Oracle
# CP - consistent and partition tolerant, usually at the cost of performance; e.g. MongoDB / HBase / Redis
# AP - available and partition tolerant, with relaxed consistency; e.g. CouchDB -- the choice of most web architectures

# BASE was proposed to address the loss of availability caused by the strong consistency that relational databases enforce.
# BASE (Basically Available, Soft state, Eventual consistency) relaxes the requirement that data be consistent at every instant in exchange for better overall scalability and performance. Large systems, because of geographic distribution and extreme performance demands, cannot reach those goals with distributed transactions; BASE is the alternative way of getting there.

4. Redis overview

# 1. What it is: Redis (Remote Dictionary Server), written in C under the BSD license; a free, high-performance key/value distributed in-memory database -- a NoSQL store that runs in memory and supports persistence.

# 2. What it is used for
# In-memory storage with persistence: Redis can asynchronously flush in-memory data to disk without interrupting service;
# "latest N items" operations, e.g. keeping the IDs of the 10 newest comments in a Redis list;
# emulating features that need expirations, such as HTTP sessions;
# publish/subscribe messaging;
# timers and counters

# 3. Features
# Redis persists data: in-memory data can be written to disk and reloaded on restart;
# beyond plain key-value pairs, Redis offers list, set, zset and hash data structures;
# Redis supports data backup, i.e. master-slave replication

# A single-process model serves all client requests; read/write events are handled through a wrapper around epoll, so actual throughput depends entirely on how efficiently the main process runs.
# epoll is the Linux kernel's mechanism, improved for handling large numbers of file descriptors; it is an enhanced version of the Linux multiplexing I/O interfaces select/poll and dramatically raises CPU utilization when only a few of many concurrent connections are active.
# 16 databases by default, indexed like an array from 0; database 0 is used initially
# SELECT switches databases
# DBSIZE reports the number of keys in the current database
# FLUSHDB empties the current database; FLUSHALL empties all of them
# One password covers everything: all 16 databases share the same password
# Redis indexes always start at zero
# Default port: 6379


# 4. Official sites: http://redis.io/ http://www.redis.cn/
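A quick sanity check of the points above as a redis-cli session (default host and port assumed):

redis-cli                      # connects to 127.0.0.1:6379 by default
127.0.0.1:6379> select 1       # switch to database 1
127.0.0.1:6379[1]> dbsize      # number of keys in the current database
127.0.0.1:6379[1]> flushdb     # empty database 1 only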

5. RDB and AOF persistence

# I. RDB (Redis Database)
# 1. What it is: at configured intervals Redis writes a point-in-time snapshot of the in-memory dataset to disk; on recovery the snapshot file is read straight back into memory. Redis forks a child process to do the persistence: data is first written to a temporary file, and only when the snapshot completes does it replace the previous dump file. The main process performs no disk I/O during this, which keeps performance very high. If you need to recover large datasets and are not very sensitive to losing the most recent writes, RDB is more efficient than AOF. Its drawback: whatever changed after the last snapshot may be lost.
# Fork: creates a copy of the current process. The new process starts with the same data (variables, environment, program counter, ...) as the original, but it is a brand-new process, a child of the original.
# RDB saves to the dump.rdb file
# Config file location: you can cp dump.rdb dump_new.rdb as a cold backup and reuse it later
# How a snapshot is triggered: SAVE just saves, blocking everything else; BGSAVE snapshots asynchronously in the background while still answering client requests. LASTSAVE returns the time of the last successful snapshot.
# How to restore: move the backup file (dump.rdb) into the Redis working directory and start the server; config get dir shows that directory
# Strengths: well suited to large-scale data recovery; fine when integrity and consistency demands are loose
# Weaknesses: backups happen at intervals, so an unexpected crash loses every change since the last snapshot; during fork the in-memory data is effectively cloned, so plan for roughly 2x memory
# How to stop it: disable all RDB save rules at runtime with redis-cli config set save ""
# Summary:



# II. AOF (Append Only File)
# 1. What it is: every write operation is recorded as a log (reads are not); the file is append-only and never rewritten in place. On startup Redis reads the file and replays the write commands from start to finish to rebuild the dataset.
# 2. AOF saves to the appendonly.aof file
# 3. Enable/repair/restore: change the default appendonly no to yes; copy a populated aof file into the data directory (config get dir) and restart Redis to reload it. Repair: change appendonly no to yes; back up the damaged AOF file; fix it with redis-check-aof --fix; then restart Redis to reload.
# 4. rewrite: because AOF only appends, the file keeps growing. To counter this there is a rewrite mechanism: when the AOF exceeds the configured threshold, Redis compacts its contents down to the smallest command set that can rebuild the data; it can also be triggered with bgrewriteaof. How it works: when the AOF has grown too large, Redis forks a new process to rewrite the file (into a temporary file followed by a rename), walking the new process's in-memory dataset and emitting one write command per record. The rewrite never reads the old AOF; it regenerates a fresh AOF from the entire in-memory database, much like a snapshot. Trigger rule: Redis remembers the AOF size after the last rewrite; by default a rewrite fires when the file has doubled since then and is larger than 64M.
# Policies: fsync on every change: appendfsync always
# (synchronous persistence: every change hits disk immediately; poor performance, best integrity); per second: appendfsync everysec (asynchronous, fsync once per second; a crash within the second loses that second's data); never: appendfsync no (leave it to the OS)
# Weaknesses: for the same dataset the aof file is far larger than the rdb file and recovery is slower; aof runs slower than rdb -- everysec is a good compromise, and no performs about the same as rdb



# III. Summary
# 1. RDB persistence snapshots your data at configured intervals
# 2. AOF persistence records every write to the server; on restart the commands are replayed to rebuild the original data. AOF appends each write in the Redis protocol format, and Redis can rewrite the file in the background so it never grows without bound
# 3. Cache only: if you only need the data to exist while the server is running, you can disable persistence entirely.
# 4. Enabling both: (1) in this case Redis loads the AOF on restart, because the AOF normally holds the more complete dataset. (2) RDB data is not real-time, and with both enabled a restart still only consults the AOF. Should you then use AOF alone? Better not: RDB is better suited to backing up the database (the AOF changes constantly and is awkward to back up) and to fast restarts, and it avoids any latent AOF bugs -- keep it as insurance.
# 5. Performance advice: (1) since RDB files serve only as backup, persist RDB on the slave only; a snapshot every 15 minutes is enough -- keep just the save 900 1 rule. (2) If you enable AOF, even the worst case loses no more than two seconds of data, and the startup script only has to load its own AOF. The costs: continuous I/O, and the blocking at the end of each AOF rewrite, when the new data produced during the rewrite is written to the new file, is almost unavoidable. As long as disk space allows, minimize the rewrite frequency: the default 64M base size is far too small, 5G or more is reasonable, and the default trigger of 100% growth over the previous size can be raised as appropriate. (3) Without AOF, master-slave replication alone can also provide high availability, saving a lot of I/O and the periodic jitter that rewrites bring. The price: if master and slave go down together, you lose ten-odd minutes of data, and the startup script must compare the RDB files of master and slave and load the newer one. Sina Weibo chose this architecture.
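The operations above condensed into commands (all standard; run them on the Redis host):

redis-cli bgsave                      # background RDB snapshot
redis-cli lastsave                    # unix time of the last successful snapshot
redis-cli config get dir              # where dump.rdb / appendonly.aof live
redis-cli config set save ""          # disable RDB save rules at runtime
redis-check-aof --fix appendonly.aof  # repair a truncated/corrupted AOF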

6. Common configuration parameters

redis.conf directives explained:
# 1. Redis does not run as a daemon by default; set this to yes to enable daemon mode
  daemonize no
# 2. When running as a daemon, Redis writes its pid to /var/run/redis.pid by default; override with pidfile
  pidfile /var/run/redis.pid
# 3. Listening port, 6379 by default. The author explained in a blog post why 6379: it spells MERZ on a phone keypad, after the Italian entertainer Alessia Merz
  port 6379
# 4. Host address to bind
  bind 127.0.0.1
# 5. Close a connection after the client has been idle this many seconds; 0 disables the timeout
  timeout 300
# 6. Log level; Redis supports four: debug, verbose, notice, warning; default verbose
  loglevel verbose
# 7. Log destination, standard output by default; if Redis runs as a daemon while logging to stdout, the log goes to /dev/null
  logfile stdout
# 8. Number of databases, default database is 0; pick one per connection with SELECT <dbid>
  databases 16
# 9. How many write operations within how many seconds trigger a sync to the data file; several conditions can be combined
  save <seconds> <changes>
  The default configuration ships three conditions:
  save 900 1
  save 300 10
  save 60 10000
  i.e. 1 change within 900 seconds (15 minutes), 10 changes within 300 seconds (5 minutes), or 10000 changes within 60 seconds.

# 10. Whether to compress data when dumping to the local database, default yes (Redis uses LZF). Disabling it saves CPU time but makes the dump file huge
  rdbcompression yes
# 11. Local database filename, default dump.rdb
  dbfilename dump.rdb
# 12. Local database directory
  dir ./
# 13. When this host is a slave, the IP and port of its master; on startup it syncs from the master automatically
  slaveof <masterip> <masterport>
# 14. Password the slave uses to connect when the master is password-protected
  masterauth <master-password>
# 15. Connection password; when set, clients must supply it via AUTH <password>; off by default
  requirepass foobared
# 16. Maximum simultaneous client connections, unlimited by default: Redis can open as many client connections as the process may open file descriptors. maxclients 0 means no limit. Once the limit is reached, Redis closes new connections with a "max number of clients reached" error
  maxclients 128
# 17. Maximum memory limit. Redis loads data into memory at startup; on reaching the limit it first tries to evict expired or soon-to-expire keys, and if memory is still exhausted after that, writes fail while reads keep working. (Redis's VM mechanism kept keys in memory and swapped values out)
  maxmemory <bytes>
# 18. Whether to log every update operation, i.e. AOF. By default Redis writes data to disk asynchronously per the save conditions above, so without AOF a power cut can lose the recent data that existed only in memory. Default no
  appendonly no
# 19. Update log filename, default appendonly.aof
  appendfilename appendonly.aof
# 20. Update log (fsync) condition, three options:
  no: let the OS sync the cached data to disk (fast)
  always: call fsync() after every update to write the data to disk (slow, safe)
  everysec: sync once per second (the compromise, and the default)
  appendfsync everysec

# 21. Whether to enable the (legacy) virtual memory mechanism, default no. Briefly: VM pages data out, letting Redis swap rarely accessed cold pages to disk while frequently accessed pages are swapped back into memory
  vm-enabled no
# 22. Swap file path, default /tmp/redis.swap; must not be shared by multiple Redis instances
  vm-swap-file /tmp/redis.swap
# 23. Data above vm-max-memory goes into the swap file; however small vm-max-memory is set, all index data (the keys) stays in memory -- so with vm-max-memory 0 every value lives on disk. Default 0
  vm-max-memory 0
# 24. The swap file is split into many pages; one object may span several pages, but a page cannot be shared by multiple objects. Size vm-page-size to your data: 32 or 64 bytes for many small objects, larger pages for large objects; when unsure, use the default
  vm-page-size 32
# 25. Number of pages in the swap file. The page table (a bitmap marking pages free or used) lives in memory and costs 1 byte of RAM per 8 pages on disk
  vm-pages 134217728
# 26. Number of threads accessing the swap file; best not to exceed the machine's core count. 0 serializes all swap-file operations, possibly causing long delays. Default 4
  vm-max-threads 4
# 27. Whether to merge small packets into one when replying to clients, default on
  glueoutputbuf yes
# 28. Use a special hash encoding while the entry count and the largest element stay below these thresholds
  hash-max-zipmap-entries 64
  hash-max-zipmap-value 512
# 29. Whether to activate incremental rehashing, default on (covered later with Redis's hashing)
  activerehashing yes
# 30. Include other configuration files; useful for sharing one common config across several Redis instances on the same host while each keeps its own specific file
  include /path/to/local.conf
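A minimal sketch tying a few of these together -- a stripped-down second instance on port 6380 (every value here is illustrative):

cat > /tmp/redis-6380.conf <<'EOF'
daemonize yes
port 6380
pidfile /var/run/redis-6380.pid
logfile /var/log/redis-6380.log
dir /tmp
dbfilename dump-6380.rdb
save 300 10
EOF
redis-server /tmp/redis-6380.conf
redis-cli -p 6380 ping   # expect PONG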

7. Redis transactions

# 1. A Redis transaction executes several commands at once; at heart it is a group of commands. All commands in a transaction are serialized and executed in order, with no other command interleaved -- no queue-jumping.
# 2. One queue of commands, executed in one shot, in order, exclusively
# 3. Common commands:
# DISCARD: cancel the transaction and abandon every command queued in the block;
# EXEC: execute every command in the transaction block;
# MULTI: mark the start of a transaction block;
# UNWATCH: cancel the WATCH on all keys
# WATCH: watch one or more keys; if any of them is changed by another command before the transaction executes, the transaction is aborted.
# (1) Normal execution: .....
# (2) Abandoning a transaction: DISCARD
# (3) All-or-nothing: a command that fails to queue (an error at MULTI time) aborts the whole transaction
# (4) Each pays its own debt: errors at EXEC time affect only the offending command; the correct ones still run
# (5) WATCH monitoring:
# Optimistic locking: as the name suggests, be optimistic -- assume nobody else will modify the data, so take no lock when reading; on update, check whether anyone changed it in the meantime, typically with a version number. Optimistic locks suit read-heavy workloads and raise throughput.
# Pessimistic locking: as the name suggests, be pessimistic -- assume someone will modify the data on every access, so lock it every time; anyone else wanting the data blocks until the lock is released. Traditional relational databases use many such locks (row locks, table locks, read locks, write locks), all taken before the operation.
# CAS (Check And Set)
# Once EXEC runs, every WATCH taken beforehand is cleared
# The WATCH directive works like an optimistic lock: if at commit time a watched key's value has been changed by another client -- say a list was pushed/popped -- the whole queued transaction is skipped
# When WATCH monitors several keys before a transaction and any of them changes after the WATCH, EXEC abandons the transaction and returns a null multi-bulk reply to tell the caller it failed

# The three phases of a transaction:
# Open: MULTI starts the transaction;
# Queue: subsequent commands are queued into the transaction rather than executed immediately, waiting in the pending queue;
# Execute: EXEC fires the transaction

# The three properties:
# Isolated as a single operation: all commands in the transaction are serialized and run in order; while executing, the transaction is never interrupted by commands sent from other clients.
# No isolation levels: queued commands are not actually executed until the transaction commits, so the painful question of "reads inside the transaction must see its own updates while outside reads must not" simply never arises
# No atomicity guarantee: if one command fails inside a Redis transaction, the commands after it still execute; there is no rollback
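A minimal transaction session showing MULTI/EXEC with a WATCH (key names are illustrative):

127.0.0.1:6379> set balance 100
127.0.0.1:6379> watch balance
127.0.0.1:6379> multi
127.0.0.1:6379> decrby balance 20
QUEUED
127.0.0.1:6379> incrby debt 20
QUEUED
127.0.0.1:6379> exec
# returns both results normally, but returns (nil) and executes nothing
# if another client changed balance between WATCH and EXEC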

8. Redis pub/sub

# An inter-process messaging pattern: publishers (pub) send messages, subscribers (sub) receive them.


# Note: you must subscribe first, then publish, to receive anything
# 1 subscribe to several channels at once: SUBSCRIBE c1 c2 c3
# 2 publish a message: PUBLISH c2 hello-redis
# 3 subscribe by pattern with the * wildcard: PSUBSCRIBE new*
# 4 deliver to the pattern: PUBLISH new1 redis2015
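The same four steps as two terminals (channel names as above):

# terminal 1 -- subscribe first
redis-cli subscribe c1 c2 c3
# terminal 2 -- then publish
redis-cli publish c2 hello-redis   # terminal 1 prints hello-redis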

9. Redis replication (master/slave)

# Master/slave replication: after the master's data changes, it is synced to the standby machines automatically according to configuration and policy. The Master handles writes, the Slaves handle reads
# Uses: read/write splitting; disaster recovery
# Configuration:
# (1) Configure the slave (database), not the master
# (2) On the slave: slaveof <master-ip> <master-port> (the link must be re-established every time the slave disconnects from the master, unless the setting is written into redis.conf)
# (3) Fine-tune the configuration file as needed
# the info Replication command shows the current role
# Three common setups (when nothing is written into the config file):
# (1) One master, two slaves: if the master dies, the slaves keep their data and keep their role (slave); when the master recovers, everything resumes as before. If one slave dies, the other slave is unaffected; when the dead slave restarts it comes back as a standalone master without the replicated data (reconfigure it to restore replication);
# (2) Daisy-chaining: a slave can itself be the master of the next slave, relieving the top master's replication load;
# (3) Slave-turned-master: promote a slave to master manually

# How replication works:
# (1) After the slave connects to the master successfully it sends a sync command
# (2) The master starts its background save process and meanwhile buffers every command that modifies the dataset; when the background save finishes, the master ships the entire data file to the slave, completing one full synchronization
# (3) Full resync: the slave receives the database file, saves it to disk and loads it into memory.
# (4) Incremental sync: the master then streams every newly buffered write command to the slave, keeping it in step
# (5) Any reconnection to the master automatically triggers a full synchronization (full resync)

# Sentinel mode: the automated version of slave-turned-master -- it monitors the master in the background and, if the master fails, promotes a slave to master based on votes



# Drawback: replication lag
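A sketch of the manual workflow with two local instances (the ports are illustrative):

redis-cli -p 6380 slaveof 127.0.0.1 6379   # make 6380 a slave of 6379
redis-cli -p 6380 info replication         # expect role:slave, master_link_status:up
redis-cli -p 6380 slaveof no one           # "slave-turned-master": promote 6380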

Nginx --- a high-performance HTTP and reverse-proxy server

1. Compiling and installing

Official site : http://nginx.org
Download : wget http://nginx.org/download/nginx-1.15.12.tar.gz
Extract : tar zxvf nginx-1.15.12.tar.gz
Enter the directory: cd nginx-1.15.12
./configure --prefix=/usr/local/nginx --add-module=../nginx-rtmp-module-1.2.1
(if configure fails, install the PCRE headers: yum install pcre pcre-devel;
the --add-module flag assumes nginx-rtmp-module was downloaded first as the live-streaming module: https://github.com/arut/nginx-rtmp-module
wget https://codeload.github.com/arut/nginx-rtmp-module/tar.gz/v1.2.1
tar -zxvf v1.2.1
//or
wget https://github.com/arut/nginx-rtmp-module/archive/v1.2.1.tar.gz
tar -zxvf v1.2.1.tar.gz)
Compile and install: make && make install
nginx directory layout:
...conf configuration files
...html web root
...logs log files
...sbin the main binaries
Start Nginx: ./sbin/nginx (port 80 must not be occupied)

2. Integrating Nginx with PHP

Compiling nginx + PHP
Note: ./configure --prefix=/usr/local/fastphp7.0 \
--with-config-file-path=/usr/local/fastphp7.0/etc \
--with-mysql=mysqlnd \
--enable-mysqlnd \
--with-gd \
--enable-gd-native-ttf \
--enable-gd-jis-conv \
--enable-fpm # run PHP-FPM as a standalone process
--with-apxs2=/usr/local/httpd/bin/apxs # (alternative) build PHP as an Apache module
--with-config-file-path=/usr/local/php/etc
--with-mysqli --with-pdo-mysql
--with-iconv-dir
--with-freetype-dir
--with-jpeg-dir
--with-png-dir
--with-zlib
--with-libxml-dir
--enable-simplexml
--enable-xml
--disable-rpath
--enable-bcmath --enable-soap --enable-zip --with-curl --enable-fpm --with-fpm-user=www --with-fpm-group=www --enable-mbstring --enable-sockets --with-gd --with-openssl --with-mhash --enable-opcache --disable-fileinfo
nginx then forwards the HTTP request variables (get parameters, user_agent, ...) to the PHP process; PHP runs as an independent process and talks to nginx -- this mode of operation is called fastcgi.
location ~ \.php$ {
root html;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
include fastcgi_params;
}
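A quick smoke test of the fastcgi wiring, assuming the location block above (with SCRIPT_FILENAME pointing at the real web root) and php-fpm listening on 127.0.0.1:9000; the file path is illustrative:

echo '<?php phpinfo();' > /usr/local/nginx/html/info.php
/usr/local/nginx/sbin/nginx -s reload
curl -I http://127.0.0.1/info.php   # expect HTTP/1.1 200 and an X-Powered-By: PHP header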

3. Signal control

https://www.nginx.com/resources/wiki/start/topics/tutorials/commandline/
nginx signal control:  kill -<signal> <master pid> (`cat logs/nginx.pid`)
1. TERM, INT  fast shutdown
2. QUIT       graceful shutdown: wait for in-flight requests to finish before exiting
3. HUP        after changing the configuration file, re-read it smoothly
4. USR1       reopen log files; useful when logs are rotated by month/day
5. USR2       upgrade the binary on the fly
6. WINCH      gracefully shut down the old worker processes
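The same signals as concrete commands (the pid-file path matches the layout above):

kill -HUP  `cat /usr/local/nginx/logs/nginx.pid`   # reload configuration smoothly
kill -USR1 `cat /usr/local/nginx/logs/nginx.pid`   # reopen log files
kill -QUIT `cat /usr/local/nginx/logs/nginx.pid`   # finish requests, then exit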

4. Virtual host configuration

<nginx.conf>

#user nobody;
worker_processes 1; # one worker process; you may raise it, but going too high gains nothing -- a common rule is CPUs * cores

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;


events {
# connection-handling characteristics of nginx are configured here
worker_connections 1024; # each worker process may hold at most 1024 connections
}


http {
include mime.types;
default_type application/octet-stream;

#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';

#access_log logs/access.log main;

sendfile on;
#tcp_nopush on;

#keepalive_timeout 0;
keepalive_timeout 65;

#gzip on;

server {
listen 80;
server_name localhost;

#charset koi8-r;

#access_log logs/host.access.log main;

location / {
root html;
index index.html index.htm;
}

#error_page 404 /404.html;

# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}

# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}

# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
# include fastcgi_params;
#}

# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}


# another virtual host using mix of IP-, name-, and port-based configuration
#
#server {
# listen 8000;
# listen somename:8080;
# server_name somename alias another.alias;

# location / {
# root html;
# index index.html index.htm;
# }
#}


# HTTPS server
#
#server {
# listen 443 ssl;
# server_name localhost;

# ssl_certificate cert.pem;
# ssl_certificate_key cert.key;

# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 5m;

# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;

# location / {
# root html;
# index index.html index.htm;
# }
#}

}
1. Name-based virtual host
server{
listen 80;
server_name z.com;

location / {
root z.com; # the root may be relative (to the nginx prefix)
index index.html;
}
}
2. Port-based virtual host
server {
listen 2022;
server_name z.com;

location / {
root /var/www/html;
index index.html;
}
}
3. IP-based virtual host
server {
listen 80;
server_name 192.168.31.95;

location / {
root html/ip; # the html/ip directory
index index.html;
}
}
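Whichever style of virtual host you add, validate and reload before trusting it:

/usr/local/nginx/sbin/nginx -t          # syntax check of nginx.conf
/usr/local/nginx/sbin/nginx -s reload   # apply without dropping connections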

5. Log management

http://nginx.org/en/docs/http/ngx_http_log_module.html
In the nginx server block:
#access_log logs/access.log main;
means this server logs to access.log using the `main` format.
The main format:
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
remote IP - remote user / local time, the request line (e.g. GET/POST), the status, the response body length, and the referer
http_user_agent: the client software or crawler; plus the original IP of forwarded requests
http_x_forwarded_for: when the request passes through a proxy, the proxy appends your original IP to this header and forwards it


Besides main, other formats may be defined, and nginx allows a different log per server:
server{
listen 80;
server_name z.com;

location / {
root z.com; # the root may be relative (to the nginx prefix)
index index.html;
}
access_log logs/z.com.access.log main;
}
Cron job + log rotation, using z.com as the example:
server{
listen 80;
server_name z.com;

location / {
root z.com; # the root may be relative (to the nginx prefix)
index index.html;
}
access_log logs/z.com.access.log main;
}

shell script log.sh
#!/bin/bash
LOGPATH=/usr/local/nginx/logs/z.com.access.log
BASEPATH=/data/$(date -d yesterday +%Y%m)
mkdir -p $BASEPATH/
bak=$BASEPATH/$(date -d yesterday +%d%H%M).zcom.access.log
mv $LOGPATH $bak
touch $LOGPATH
kill -USR1 `cat /usr/local/nginx/logs/nginx.pid`
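To run log.sh every night at 00:01, add one line to the crontab (the script path matches the example above):

crontab -e
# then add:
1 0 * * * /bin/bash /usr/local/nginx/log.sh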

6. location matching

Syntax: location [=|~|~*|^~] patt {

}
Three types:
location = patt {}  [exact match]
location patt {}    [ordinary (prefix) match]
location ~ patt {}  [regex match]
How they take effect:
First check for an exact match; if one hits, matching stops.
location = patt {
config A
}
If $uri == patt, the match succeeds and configA is used.
location = / {
root /var/www/html/;
index index.htm index.html;
}

location / {
root /usr/local/nginx/html;
index index.html index.htm;
}
Visiting http://xxx.com/:
1: the exact match on "/" hits and yields index.htm as the index page
2: the request is re-run internally for /index.htm, whose root is now /usr/local/nginx/html
3: the file ultimately served is /usr/local/nginx/html/index.htm

Regex matching
location / {
root /usr/local/nginx/html;
index index.html index.htm;
}

location ~ image {
root /var/www/image;
index index.html;
}
Visiting http://xx.com/image/logo.png:
here "/" matches "/image/logo.png",
but the regex "image" also matches "image/logo.png" -- which wins?
The regular expression wins: the image actually served is /var/www/image/logo.png

location / {
root /usr/local/nginx/html;
index index.html index.htm;
}

location /foo {
root /var/www/html;
index index.html;
}
Visiting http://xxx.com/foo:
both location patterns match the uri "/foo" --
"/" is a left prefix of "/foo", and "/foo" is also a left prefix of "/foo" --
and what is actually served is /var/www/html/index.html.
Reason: "/foo" is the longer match, so it is used.
How a location is chosen:
1. Try exact matches first; on a hit, return the result immediately and stop resolving;
2. Evaluate ordinary (prefix) matches; if several hit, remember the longest one (remember it, but keep going);
3. Evaluate the regex matches in the order they appear in the configuration, top to bottom; the first regex that matches wins immediately and resolution ends.
For ordinary matches, order does not affect the outcome, because the outcome is decided by match length;
for regex matches, order does affect the outcome, because the first match from the top wins.

7. Rewrite

Common directives:
if (condition) {
rewriting rules
}
Set a condition, then rewrite.
Condition syntax:
1: "=" tests equality, for string comparison
2: "~" matches a regex (case-sensitive here)
~* case-insensitive regex
3: -f -d -e test for a file, a directory, or existence.
set     # set a variable
return  # return a status code
break   # stop rewriting
rewrite # rewrite the URL
Example 1:
location / {
if ($remote_addr = 192.168.31.95 ){
return 403;
}
root /usr/local/nginx/html;
index index.html index.htm;
}
Example 2:
location / {
if ($http_user_agent ~ MSIE) {
rewrite ^.*$ /ie.htm;
break; # (without break this redirects in a loop)
}
root /usr/local/nginx/html;
index index.html index.htm;
}
Example 3:
location / {
if (!-e $document_root$fastcgi_script_name) {
rewrite ^.*$ /404.html break;
}
root /usr/local/nginx/html;
index index.html index.htm;
}
Take the nonexistent page xx.com/dsafsd.html as an example:
the access log still records GET /dsafsd.html HTTP/1.1.
Note: a server-internal rewrite is not the same as a 302 redirect.
A redirect changes the URL -- the browser issues a fresh HTTP request for 404.html -- whereas an internal rewrite keeps the context unchanged:
fastcgi_script_name is still dsafsd.html, which is why it would loop without break.

set assigns variables; it can serve as a flag in multi-condition tests,
achieving the effect of Apache's RewriteCond:
location / {
if ($http_user_agent ~* msie){
set $isie 1;
}
if ($fastcgi_script_name = ie.html) {
set $isie 0;
}
if ($isie = 1) {
rewrite ^.*$ ie.html;
}
root /usr/local/nginx/html;
index index.html index.htm;
}

8. URL rewriting

location /ecshop {
index index.php;
rewrite "good-(\d{1,9})\.html" /ecshop/goods.php?id=$1;
rewrite article-([\d]+)\.html$ /ecshop/article.php?id=$1;
rewrite category-(\d+)-b(\d+)\.html /ecshop/category.php?id=$1&brand=$2;
rewrite category-(\d+)-b(\d+)-min(\d+)-max(\d+)-attr([\d\.]+)\.html /ecshop/category.php?id=$1&brand=$2&price_min=$3&price_max=$4&filter_attr=$5;
rewrite category-(\d+)-b(\d+)-min(\d+)-max(\d+)-attr([\d+\.])-(\d+)-([^-]+)-([^-]+)\.html /ecshop/category.php?id=$1&brand=$2&price_min=$3&price_max=$4&filter_attr=$5&page=$6&sort=$7&order=$8;
Note: in URL rewrites, if the regex contains "{}", the regex must be wrapped in double quotes (as in the first rule above)
}

9. Gzip compression

How it works:
Browser ---request----> declares it can accept gzip, deflate, compress or sdch compression
At the HTTP level, the request header declares Accept-Encoding: gzip deflate sdch (these name compression algorithms; sdch is promoted by Google and not yet widely supported by servers)
Server --> response --- compresses the body with gzip ----> sends it to the browser
Browser <----- decodes gzip ----- receives the gzipped content ----
Common gzip parameters:
gzip on|off; # enable gzip?
gzip_buffers 32 4K | 16 8K # buffers (how many in-memory compression blocks, and how big each is)
gzip_comp_level [1-9] # compression level, 6 recommended (higher levels compress smaller but burn more CPU)
gzip_disable # regex on the UA: which clients get no gzip
gzip_min_length 200 # minimum body length in bytes worth compressing (below this it is pointless)
gzip_http_version 1.0|1.1 # minimum HTTP version to compress for (can be left unset; nearly everything is 1.1 today)
gzip_proxied # how to handle caching/compression for requests arriving via a proxy
gzip_types text/plain application/xml # which MIME types to compress, e.g. txt, xml, html, css
gzip_vary on|off # send the gzip compression flag (Vary header)?
Example:
#gzip on;

server {
listen 80;
server_name localhost;

gzip on;
gzip_buffers 32 4K;
gzip_comp_level 6;
gzip_min_length 4000;
gzip_types text/css text/xml application/javascript; #(conf/mime.types)

#charset koi8-r;

#access_log logs/host.access.log main;

location / {
root html;
index index.php index.html index.htm;
}

#error_page 404 /404.html;

# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}

# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}

# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
location ~ \.php$ {
root html;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
include fastcgi_params;
}

# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
Note:
binary files such as images and mp3s need no compression,
since the compression ratio is poor (say 100 -> 80 bytes) and compressing costs CPU anyway.
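The result is easy to verify with curl (any text asset covered by the gzip_types above will do):

curl -I -H 'Accept-Encoding: gzip' http://127.0.0.1/style.css
# a "Content-Encoding: gzip" response header confirms compression is applied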

10. Expires caching

Setting expiry times in nginx, inside a location or if block:
Format: expires 30s;
expires 30m;
expires 2h;
expires 30d;
location ~* \.(jpg|jpeg|gif|png) {
#root html/image;
expires 1d;
}
How the cache works:
When serving a file, the server also sends an etag header (a signature of the content: change the content and it changes too) and a last-modified timestamp.
On the next request the browser sends both values back in its headers (as If-None-Match / If-Modified-Since); the server checks whether the file changed, and if not, it answers with just the headers (etag, last-modified). The browser then knows the content is unchanged and serves its local cached copy.
The server is still contacted in this flow, but very little is transferred; it suits assets with short change cycles such as static html, js and css.
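The handshake is easy to observe with curl (the file name is illustrative and matches the location above):

curl -I http://127.0.0.1/logo.png
# note the ETag, Last-Modified and Expires headers, then replay one:
curl -I -H 'If-Modified-Since: <Last-Modified value from above>' http://127.0.0.1/logo.png
# a 304 Not Modified reply carries no body -- the browser reuses its cache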

11. Reverse proxy: Nginx + Apache dynamic/static split

Two directives support this: proxy and upstream, for reverse proxying and load balancing respectively
1. Apache on port 8080 (assume 192.168.1.200:8080) handles the PHP files
2. nginx acts as the proxy and serves the js/css files itself:
location ~ \.php$ {
proxy_set_header X-Forwarded-For $remote_addr; # pass the client IP along
proxy_pass http://192.168.1.200:8080; # a single backend; it can also point at an upstream group of servers
# root html;
# fastcgi_pass http://192.168.1.200:8080;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
}

12. Load balancing

How it works: if the reverse proxy has several backend servers, bind them together under a group name with upstream, then point proxy_pass at that group.
Assume two servers (dedicated to image data): 192.168.1.200
upstream imageserver {
server 192.168.1.200:81 weight=1 max_fails=2 fail_timeout=3;
server 192.168.1.200:82 weight=1 max_fails=2 fail_timeout=3;
}
server {
listen 81;
server_name localhost;
root html;
access_log logs/81-access.log main;
}
server {
listen 82;
server_name localhost;
root html;
access_log logs/82-access.log main;
}

location ~* \.(jpg|jpeg|gif|png)$ {
proxy_set_header X-Forwarded-For $remote_addr; # pass the client IP along
proxy_pass http://imageserver;
}
The default balancing algorithm simply walks the backend servers in order, one request after another.
Other algorithms, such as consistent hashing, require a third-party module like ngx_http_upstream_consistent_hash.

13. Memcached

Connecting from nginx:
location / {
set $memcached_key "$uri";
memcached_pass 127.0.0.1:11211;
error_page 404 /callback.php;
# root html;
# index index.html index.htm;
}
Connecting from PHP:
$mem = new Memcached();
$mem->addServer('127.0.0.1',11211);
<?php
//user2345.html
// take the URI as the cache key
$uri = $_SERVER['REQUEST_URI'];
// extract the uid from the URI
$uid = substr($uri,5,strpos($uri,'.')-5);

// connect to the database, run the query, and write the result into memcache
$conn = mysql_connect('localhost','root','pwd');
$sql = 'use test';
mysql_query($sql,$conn);
$sql ='set charset utf8';
mysql_query($sql,$conn);
$sql = "select * from user where uid=$uid";
$res = mysql_query($sql,$conn);
$user = mysql_fetch_assoc($res);
if(empty($user)){
echo "no this user";
}else{
print_r($user);
// connect to Memcache
$mem = new Memcache();
$mem->connect('127.0.0.1',11211);
$mem->add($uri,$user,0,300);
$mem->close();
}

14. ngx_http_upstream_consistent_hash (consistent hashing)

Compiling a third-party nginx module: ./configure --prefix=/xxx/xxx --add-module=/path/ngx_module
Compile and install: make && make install
After installation, in nginx.conf:
upstream memserver {
consistent_hash $request_uri;
server 127.0.0.1:11211;
server 127.0.0.1:11212;
server 127.0.0.1:11213;
}
location / {
set $memcached_key "$uri";
memcached_pass memserver;
error_page 404 /callback.php;
# root html;
# index index.html index.htm;
}
<?php
//user2345.html
// take the URI as the cache key
$uri = $_SERVER['REQUEST_URI'];
// extract the uid from the URI
$uid = substr($uri,5,strpos($uri,'.')-5);

// register the same servers as in the nginx upstream
$mem = new Memcache();
$mem->addServer('127.0.0.1',11211);
$mem->addServer('127.0.0.1',11212);
$mem->addServer('127.0.0.1',11213);


// connect to the database, run the query, and write the result into memcache
$conn = mysql_connect('localhost','root','pwd');
$sql = 'use test';
mysql_query($sql,$conn);
$sql ='set charset utf8';
mysql_query($sql,$conn);
$sql = "select * from user where uid=$uid";
$res = mysql_query($sql,$conn);
$user = mysql_fetch_assoc($res);
if(empty($user)){
echo "no this user";
}else{
print_r($user);
// store under the consistent-hash strategy
$mem->add($uri,$user,0,300);
$mem->close();
}
php.ini configuration:

memcache.hash_strategy = consistent
Note:
when upstream does load balancing, use 127.0.0.1 rather than localhost!

15. Single-node nginx stress testing

Four servers:
Server A : 192.168.1.201
Server B : 192.168.1.202 runs the stress test
Server C : 192.168.1.203
Server D : 192.168.1.204
How does a high-performance site support a heavy request volume?
1. Cut requests wherever possible; for developers: merge css, sprite background images, reduce mysql queries, etc.;
2. For ops, use nginx's expires to lean on browser caching and cut repeat queries;
3. Serve requests from a CDN;
4. Finally, the unavoidable requests are carried by a server cluster plus load balancing.
nginx status module: http_stub_status_module
./configure --prefix=/usr/local/nginx/ --add-module=/app/ngx_http_consistent_hash-master --with-http_stub_status_module

location /status {
stub_status on;
access_log off;
allow 192.168.1.100;
deny all;
}
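A simple run from the test box using Apache Bench, plus a read of the status page (the target IP is illustrative; the allowed IP matches the block above):

ab -n 10000 -c 500 http://192.168.1.201/index.html   # 10k requests, 500 concurrent
curl http://192.168.1.201/status                     # from the allowed IP 192.168.1.100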

High-concurrency checklist:

I. Sockets
System level:
1. flood defense: disable SYN-flood protection (for load tests)
2. maximum backlog: somaxconn
3. speed up recycling of TCP connections
4. allow reuse of idle TCP connections: reuse
nginx level: connections allowed per worker process: worker_connections
II. Files
System level:
raise ulimit -n
nginx level:
files each worker process may open: worker_rlimit_nofile (see the configuration below)

Concrete configuration:

nginx configuration: worker_connections
events {
worker_connections 10240;
}
worker_processes 1;
worker_rlimit_nofile 10000; # how many files one worker process may open
keepalive_timeout 0; # on a high-concurrency site, close HTTP connections quickly to speed up TCP recycling
System level: 1. socket tuning: /proc/sys/net/core/somaxconn
change it: echo 50000 > /proc/sys/net/core/somaxconn
2. faster TCP recycling: /proc/sys/net/ipv4/tcp_tw_recycle
change it: echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
3. reuse idle TCP connections: /proc/sys/net/ipv4/tcp_tw_reuse
change it: echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
4. no flood defense: /proc/sys/net/ipv4/tcp_syncookies
change it: echo 0 > /proc/sys/net/ipv4/tcp_syncookies
Or as a tuning script,
tcpupgrade.sh:
#!/bin/bash
echo 50000 > /proc/sys/net/core/somaxconn
echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
echo 0 > /proc/sys/net/ipv4/tcp_syncookies
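The echo commands above do not survive a reboot; a sketch of making the same four keys persistent via sysctl:

cat >> /etc/sysctl.conf <<'EOF'
net.core.somaxconn = 50000
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_syncookies = 0
EOF
sysctl -p   # apply immediately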

16. nginx server cluster

Four servers:
Server A : 192.168.1.201 MySQL
Server B : 192.168.1.202 nginx static cache, fronting the memcache cluster
Server C : 192.168.1.203 MySQL+PHP
Server D : 192.168.1.204 Memcache
Server C 192.168.1.203 runs PHP on four ports: 9001 | 9002 | 9003 | 9004
When nginx calls into fpm and fpm runs short of processes, fpm spawns children; spawning needs kernel scheduling and takes time, so on a site with high concurrency we can statically pre-fork a fixed number of children and keep them resident in memory.
php-fpm configs: php-fpm.conf plus php-fpm9001.conf, php-fpm9002.conf, php-fpm9003.conf, php-fpm9004.conf
pm = static # keep the fpm children alive permanently instead of spawning dynamically
pm.max_children = 16 # number of children to keep alive
Start php-fpm with an explicit config file;
the start.sh script launches them in a batch:
#!/bin/bash
/usr/local/php/sbin/php-fpm -y /usr/local/php/etc/php-fpm.conf
/usr/local/php/sbin/php-fpm -y /usr/local/php/etc/php-fpm9001.conf
/usr/local/php/sbin/php-fpm -y /usr/local/php/etc/php-fpm9002.conf
/usr/local/php/sbin/php-fpm -y /usr/local/php/etc/php-fpm9003.conf
/usr/local/php/sbin/php-fpm -y /usr/local/php/etc/php-fpm9004.conf


Server C 192.168.1.203, in /var/www:
php.ini consistent-hash setting: memcache.hash_strategy = consistent



Server C 192.168.1.203, /var/www directory:
callback.php
<?php
$mem = new Memcache();
$mem->addServer('192.168.1.204',11211);
$mem->addServer('192.168.1.204',11212);
$mem->addServer('192.168.1.204',11213);
$mem->addServer('192.168.1.204',11214);
$mem->addServer('192.168.1.204',11215);
$mem->addServer('192.168.1.204',11216);
$mem->addServer('192.168.1.204',11217);
$mem->addServer('192.168.1.204',11218);
$uri = $_SERVER['REQUEST_URI'];
if($uri == '/index.html'){
$cont = "this is index.html";
$mem->add($uri,$cont,0,300);
echo $cont;
} else if(substr($uri, 1,3) == 'com'){
$comid = substr($uri, 4,-5);
echo "you want to see ".$comid." company";
$conn = mysql_connect('192.168.1.201','root','pwd');
if(!$conn){
exit('mysql connect failed');
}
$sql = "use big_data";
mysql_query($sql,$conn);
$sql = "set names utf8";
mysql_query($sql,$conn);
$sql = "select id,name,address from lx_com where id = ".$comid;
$res = mysql_query($sql,$conn);
$info = mysql_fetch_assoc($res);
if(empty($info)){
echo "no this company";
exit;
}
$cont = '<h1>'.$info['name'].'</h1>';
$cont .= '<h2>'.$info['address'].'</h2>';
$mem->add($uri,$cont,0,300);
echo $cont;
}
echo " ~ ~ from mysql";



Server B 192.168.1.202 nginx.conf configuration:
upstream memserver {
consistent_hash $request_uri;
server 192.168.1.204:11211;
server 192.168.1.204:11212;
server 192.168.1.204:11213;
server 192.168.1.204:11214;
server 192.168.1.204:11215;
server 192.168.1.204:11216;
server 192.168.1.204:11217;
server 192.168.1.204:11218;
}
upstream phpserver {
server 192.168.2.203:9000;
server 192.168.2.203:9001;
server 192.168.2.203:9002;
server 192.168.2.203:9003;
server 192.168.2.203:9004;
}
location / {
set $memcached_key $request_uri;
memcached_pass memserver;
error_page 404 = /callback.php; # served from server C 192.168.1.203, /var/www
#root html;
#index index.html index.htm;
}
location ~ \.php$ {
root /var/www; # server C 192.168.1.203, /var/www directory
fastcgi_pass phpserver;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}


17. Cluster performance test

Test script: the first 100,000 rows are the hot data
Client: server A : 192.168.1.201 (MySQL)
Requests target: 192.168.1.202
<?php
$mem = new Memcache();
$mem->addServer('192.168.1.204',11211,true); # true: persistent connection
$mem->addServer('192.168.1.204',11212,true);
$mem->addServer('192.168.1.204',11213,true);
$mem->addServer('192.168.1.204',11214,true);
$mem->addServer('192.168.1.204',11215,true);
$mem->addServer('192.168.1.204',11216,true);
$mem->addServer('192.168.1.204',11217,true);
$mem->addServer('192.168.1.204',11218,true);
$min = 15422432;
if(mt_rand(0,100) <= 90){
$comid = $min + mt_rand(0,100000); // 90% of requests hit the 100k hot rows
}else{
$comid = mt_rand(0,$min);
}
if( !($cont = $mem->get('com'.$comid) )){
$conn = mysql_connect('192.168.1.201','root','pwd');
$sql = "use big_data";
mysql_query($sql,$conn);
$sql = "set names utf8";
mysql_query($sql,$conn);
$sql = "select id,name,address from lx_com where id = ".$comid;
$res = mysql_query($sql,$conn);
$info = mysql_fetch_assoc($res);
if(empty($info)){
echo "no this company";
exit;
}
$cont = '<h1>'.$info['name'].'</h1>';
$cont .= '<h2>'.$info['address'].'</h2>';
$mem->add('com'.$comid,$cont,0,300);
echo $cont;
}else{
echo $cont;
}

nginx review

18. Installation

Download the required components

nginx download:

http://nginx.org/en/download.html
the pcre library, needed by nginx:

http://sourceforge.net/projects/pcre/files/pcre/
zlib, needed by nginx:

http://www.zlib.net/
openssl, needed by nginx:

https://github.com/openssl/openssl
Unpack zlib, openssl and pcre in sibling directories

Enter the nginx directory and configure:

./configure \
--prefix=/usr/local/nginx \
--with-http_ssl_module \
--with-http_flv_module \
--with-http_stub_status_module \
--with-http_gzip_static_module \
--with-pcre=../pcre-8.39 \
--with-zlib=../zlib-1.2.8 \
--with-openssl=../openssl-master
Or copy-paste directly:

./configure --prefix=/usr/local/nginx --with-http_ssl_module --with-http_flv_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre=../pcre-8.39 --with-zlib=../zlib-1.2.8 --with-openssl=../openssl-master
Compile and install:

$ make && sudo make install
Nginx lands in /usr/local/nginx (use --prefix= to pick another location). After a successful install the directory has four subdirectories: conf, html, logs and sbin. The configuration lives in conf/nginx.conf and the binary is sbin/nginx. Make sure nothing else occupies port 80, then start Nginx with sbin/nginx.

Start nginx:

$ sudo /usr/local/nginx/sbin/nginx
# netstat -ano | grep 80
tcp        0      0 0.0.0.0:80      0.0.0.0:*       LISTEN
Open this machine's IP in a browser; "Welcome to nginx!" means Nginx is installed and running.

# check the configuration file
# /usr/local/nginx/sbin/nginx -t
# show the compile options
# /usr/local/nginx/sbin/nginx -V
# reload Nginx
# sudo /usr/local/nginx/sbin/nginx -s reload
# stop Nginx
# sudo /usr/local/nginx/sbin/nginx -s stop
# stop gracefully
# sudo /usr/local/nginx/sbin/nginx -s quit
# kill -s SIGQUIT pid_master
# kill -s SIGWINCH pid_master

19. Configuration

The nginx.conf file breaks down roughly into these blocks:

main
events {
....
}
http {
....
upstream myproject {
.....
}
server {
....
location {
....
}
}
server {
....
location {
....
}
}
....
}

The nginx configuration file divides into six main areas:

1. main (global settings)
2. events (nginx working mode)
3. http (HTTP settings)
4. server (virtual host settings)
5. location (URL matching)
6. upstream (load-balancer settings)



1. The main block
Below is a main area -- the global settings:

user nobody nobody;
worker_processes 2;
error_log /usr/local/var/log/nginx/error.log notice;
pid /usr/local/var/run/nginx/nginx.pid;
worker_rlimit_nofile 1024;
user names the user and group the Nginx worker processes run as; by default they run under the nobody account.

worker_processes sets how many worker processes Nginx starts. Each Nginx process costs roughly 10-12MB of memory. As a rule of thumb one process is enough; on a multi-core CPU, set it to the number of CPUs. With 2 here, two workers are started, three processes in total.

error_log defines the global error log file. The output levels are debug, info, notice, warn, error and crit; debug logs the most detail, crit the least.

pid sets where the process id file is stored.

worker_rlimit_nofile caps how many file descriptors one nginx process may open (1024 here); to raise it beyond the shell limit, also run e.g. "ulimit -n 65535".


2. The events block
events sets nginx's working mode and the connection ceiling:

events {
use epoll; # on Linux
worker_connections 1024;
}
use selects Nginx's working mode. Nginx supports select, poll, kqueue, epoll, rtsig and /dev/poll. select and poll are the standard models; kqueue and epoll are the efficient ones -- epoll runs on Linux and kqueue on BSD. On Linux, epoll is the first choice.

worker_connections sets the maximum connections per Nginx process, i.e. the maximum front-end requests accepted, default 1024. The overall ceiling is Max_clients = worker_processes * worker_connections; acting as a reverse proxy it becomes Max_clients = worker_processes * worker_connections/4. A process's connection ceiling is bounded by the system's maximum open files, so worker_connections only takes full effect after running "ulimit -n 65536".


3. The http block
The http block is arguably the core module: it configures the HTTP server's properties, and its server and upstream sub-blocks are crucial -- reverse proxying, load balancing and virtual directories are detailed later.

http{
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /usr/local/var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 10;
#gzip on;
upstream myproject {
.....
}
server {
....
}
}
What each option in this block means:

include sets the file mime types; the types are defined in the mime.type file under the configuration directory, telling nginx how to recognize file types.

default_type sets the default type to a binary stream, used whenever a file's type is undefined: for example, with no location configured for asp, requesting an asp file makes the browser download it instead of rendering it.

log_format sets the log format and which fields are recorded; it is named main here, exactly matching what access_log uses.

A main-format log line looks like this (fields can be added or removed):

127.0.0.1 - - [21/Apr/2015:18:09:54 +0800] "GET /index.php HTTP/1.1" 200 87151 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.76 Safari/537.36"
access_log

records each visit to a log file; the trailing main names the log format, corresponding to log_format's main.

sendfile enables the efficient file-transfer mode. Setting the tcp_nopush and tcp_nodelay directives to on helps prevent network congestion.

keepalive_timeout sets how long an idle client connection stays alive; once exceeded, the server closes the connection.

4. The server block
The server block is a sub-block of http; it defines a virtual host. Only the basic configuration is covered here; the rest comes later.

Let's look at how a simple server block is written:

server {
    listen 8080;
    server_name localhost 192.168.12.10 www.yangyi.com;
    # Global definitions; if everything lives in one directory, this is the simplest setup.
    root /Users/yangyi/www;
    index index.php index.html index.htm;
    charset utf-8;
    access_log /usr/local/var/log/host.access.log main;
    error_log /usr/local/var/log/host.error.log error;
    ....
}
The server keyword marks the start of a virtual host.

listen specifies the port the virtual host serves on.

server_name specifies the IP address or domain names; multiple domains are separated by spaces.

root is the web root directory for the whole server virtual host. Note the distinction from root defined inside a location {} block.

index globally defines the default index page. Again, distinguish it from index inside a location {} block.

charset sets the default page encoding.

access_log specifies where this virtual host's access log is stored; the trailing main selects the access-log output format.

5. The location block
location is the most used and most important block in nginx; load balancing, reverse proxying and virtual domains all involve it.

As its name suggests, location locates, i.e. parses and matches, URLs. It offers powerful regex matching as well as conditional matching, so the location directive lets Nginx filter and route dynamic and static pages differently; a PHP environment setup, for example, relies on it.

First, setting the default index page and the virtual host directory:

location / {
    root /Users/yangyi/www;
    index index.php index.html index.htm;
}
location / matches requests for the root path.

The root directive specifies the virtual host's web directory for this location. It can be a relative path (relative to nginx's installation directory) or an absolute path.

# Reverse proxy configuration
location /itcast/ {
    proxy_pass http://127.0.0.1:12345;
    proxy_set_header X-real-ip $remote_addr;
    proxy_set_header Host $http_host;
}


# Using uwsgi
location /python/ {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:33333;
}



# Serving files from a directory on the nginx host itself
location / {
    root /home/itcast/xwp/itcast/;
    index index.html index.htm;
}

location /static/ {
    alias /var/static/;
}

6. The upstream block
The upstream module is responsible for load balancing: it distributes client requests across back-end servers using a simple scheduling algorithm. Here we just learn how to use it; concrete examples come later.

upstream test.com {
    ip_hash;
    server 192.168.123.1:80;
    server 192.168.123.2:80 down;
    server 192.168.123.3:8080 max_fails=3 fail_timeout=20s;
    server 192.168.123.4:8080;
}
In the example above, the upstream directive defines a load balancer named test.com. The name is arbitrary; it is simply referenced wherever needed later.

The ip_hash inside it is one of the available load-balancing scheduling algorithms.

Nginx's load-balancing module currently supports four scheduling algorithms:

① weight round-robin (the default). Requests are assigned to the back-end servers one by one in order of arrival; if a back-end server goes down, it is removed automatically so user traffic is unaffected. weight sets the round-robin weight: the higher the value, the higher the probability of being selected, mainly useful when back-end servers have uneven performance.
② ip_hash. Requests are assigned by a hash of the client IP, so visitors from the same IP always hit the same back-end server, which effectively solves the session-sharing problem of dynamic pages.
③ fair. A smarter algorithm than the two above: it balances by page size and load time, i.e. it assigns requests based on back-end response time, preferring the fastest. Nginx does not support fair out of the box; to use it you must download Nginx's upstream_fair module.
④ url_hash. Requests are assigned by a hash of the requested URL, directing each URL to the same back-end server, which improves the hit rate of back-end cache servers. Nginx does not support url_hash out of the box; to use it you must install Nginx's hash package.
In the HTTP Upstream module, the server directive specifies a back-end server's IP address and port, and it can also set each back-end server's status within the scheduling. Common statuses:

down: this server temporarily does not take part in load balancing.

backup: a reserved backup machine. It only receives requests when all the non-backup machines are down or busy, so it carries the lightest load.

max_fails: the number of allowed request failures, default 1. When the limit is exceeded, the error defined by the proxy_next_upstream module is returned.

fail_timeout: how long to suspend the server after max_fails failures. max_fails and fail_timeout are used together.

Note: when the scheduling algorithm is ip_hash, a back-end server's status cannot be weight or backup.

Note: when nginx's worker_rlimit_nofile limit is reached, additional client connections fail with a 502 error. After defining a log format with the log_format directive, you still need the access_log directive to specify where the log file is stored.

20. Reverse proxying

A forward proxy is the classic "proxy": it works like a stepping stone. Simply put, I am a user who cannot reach some website, but I can reach a proxy server that can; so I connect to the proxy, tell it what I want from the unreachable site, and the proxy fetches the content and returns it to me. From the website's point of view there is only a record of the proxy fetching the content; it may not even know the request came from a user, and the user's identity stays hidden, depending on whether the proxy reveals it. In short: a forward proxy is a server sitting between the client and the origin server; to get content from the origin server, the client sends the proxy a request naming the target (the origin server), and the proxy forwards the request and returns the fetched content to the client. The client must be explicitly configured to use a forward proxy.

A reverse proxy, by contrast, is a proxy server that accepts connection requests from the internet, forwards them to servers on the internal network, and returns the servers' results to the requesting clients; to the outside, the proxy server itself appears to be the server.

In terms of use cases:

The typical use of a forward proxy is to give LAN clients behind a firewall access to the Internet.
A forward proxy can also use caching to reduce network usage. The typical use of a reverse proxy is to expose servers behind a firewall to Internet users.
A reverse proxy can also load-balance across multiple back-end servers, or provide caching for slower back ends.
In addition, a reverse proxy enables advanced URL policies and management, letting pages from different web server systems coexist under the same URL space.

Basic reverse-proxy configuration:

1. proxy_pass
proxy_pass URL;
Valid blocks: location, if
Proxies the current request to the server given by the URL parameter; the URL may be a host name or an IP address plus port:
proxy_pass http://localhost:8000;
It can also be combined with load balancing (covered in the load-balancing section),
and it can convert between HTTP and HTTPS:
proxy_pass http://192.168.0.1;
By default the reverse proxy does not forward the Host header from the request; if you need that header forwarded, set:
proxy_set_header Host $host;

2. proxy_method
proxy_method method_name;
Valid blocks: http, server, location
Sets the HTTP method used when forwarding:
proxy_method POST;
With this, a GET request from the client is forwarded as a POST.

3. proxy_hide_header
proxy_hide_header header1;
Valid blocks: http, server, location
Nginx forwards the upstream server's response to the client, but by default certain HTTP header fields are not forwarded (Date, Server, X-Pad, X-Accel-*).
proxy_hide_header specifies additional headers that must not be forwarded:
proxy_hide_header Cache-Control;
proxy_hide_header MicrosoftOfficeWebServer;

4. proxy_pass_header
proxy_pass_header header1;
Valid blocks: http, server, location
The opposite of proxy_hide_header: it sets which headers are allowed to be forwarded.
proxy_pass_header X-Accel-Redirect;

5. proxy_pass_request_body
proxy_pass_request_body on | off;
Default: on
Valid blocks: http, server, location
Determines whether the HTTP request body is forwarded to the upstream server.

6. proxy_pass_request_header
proxy_pass_request_header on | off;
Default: on
Valid blocks: http, server, location
Determines whether the HTTP request headers are forwarded.

7. proxy_redirect
proxy_redirect [default | off | redirect replacement]
Default: default
Valid blocks: http, server, location
When the upstream responds with a redirect or refresh (HTTP 301/302), proxy_redirect rewrites the Location or Refresh header fields.

proxy_redirect http://localhost:8000/two/ http://frontend/one/;
If the upstream responds 302 with a Location of http://localhost:8000/two/some/uri/,
what is actually forwarded to the client is http://frontend/one/some/uri/.
Variables provided by the ngx_http_core_module module mentioned earlier can be used:
proxy_redirect http://localhost:8000/two/ http://$host:$server_port/;
The host part of the replacement parameter can be omitted, in which case the virtual host name fills it in:
proxy_redirect http://localhost:8000/two/ /one/;

With the off parameter, the Location and Refresh fields are left unchanged:
proxy_redirect off;

With proxy_redirect default;
the following two configurations are equivalent:
location /one/ {
    proxy_pass http://upstream:port/two/;
    proxy_redirect default;
}
location /one/ {
    proxy_pass http://upstream:port/two/;
    proxy_redirect http://upstream:port/two/ /one/;
}

8. proxy_next_upstream
proxy_next_upstream [error | timeout | invalid_header | http_500 | http_502~504 | http_404 | off]
Default: proxy_next_upstream error timeout;
Valid blocks: http, server, location

This directive means that when forwarding a request to one upstream server fails, the request is retried against another upstream server.
Its parameters state under which conditions the next upstream server is tried:
error: an error occurred while connecting to, sending the request to, or reading the response from the upstream
timeout: a timeout occurred while sending the request or reading the response
invalid_header: the upstream server's response was invalid
http_500: the upstream responded 500
http_502: the upstream responded 502
http_503: the upstream responded 503
http_504: the upstream responded 504
http_404: the upstream responded 404
off: disable proxy_next_upstream; on an error, do not retry against another upstream
Nginx's reverse-proxy module offers many more options, such as connection timeouts, how temporary files are stored, and how upstream responses are cached.


You can read the ngx_http_proxy_module documentation for more details.
# sudo vim /usr/local/nginx/conf/nginx.conf

server {
    listen 80;
    server_name localhost;
    location / {
        # The machine below must be reachable from this proxy and run nginx;
        # the host numbered 100 serves the actual pages.
        proxy_pass http://192.168.1.100;
        root html;
        index index.html index.htm;
    }
}
sudo /usr/local/nginx/sbin/nginx -s reload
Add some checks on the requested Host header to keep others from pointing their proxies at your machine.

21. Load balancing

Load balancing combines multiple servers into a symmetric server set in which every server has equal status and can serve externally on its own, without help from the others. Through a load-sharing technique, externally arriving requests are distributed across the servers in the set according to a preconfigured algorithm, and the server that receives a request answers the client independently.

Load balancing spreads client requests evenly across the server array, providing fast access to important data and solving the problem of serving a large number of concurrent requests.

1. The upstream block
upstream name {...}
Valid block: http
An upstream block defines a cluster of upstream servers, to be referenced by proxy_pass in a reverse proxy:

upstream mynet {
    server www.wopai1.com;
    server www.wopai2.com;
    server www.wopai3.com;
}
server {
    location / {
        proxy_pass http://mynet;
    }
}

2. server
server name [parameters]
Valid block: upstream
The server directive names one upstream server: a domain name, an IP address with port, or a UNIX socket handle.
weight=number: the forwarding weight for this server, default 1.
max_fails=number: used together with fail_timeout;
if forwarding to the upstream fails more than number times within a fail_timeout window, the server is considered unavailable for the current fail_timeout window. max_fails defaults to 1; setting it to 0 disables failure counting.
fail_timeout=time: the window within which max_fails failures mark the upstream unavailable; default 10s.
down: the upstream server is permanently offline; only meaningful with ip_hash configured.
backup: ineffective with ip_hash; requests are forwarded to a backup server only after all non-backup machines have failed.
upstream mynet {
    server www.wopai1.com weight=5;
    server www.wopai2.com:8081 max_fails=3 fail_timeout=300s;
    server www.wopai2.com down;
}

3. ip_hash
Valid block: upstream
Use this when requests from a given user should always land on the same fixed server.
A key is computed by hashing the client IP and taken modulo the number of upstream servers in the cluster; the resulting host receives the forwarded request.
ip_hash cannot be used together with weight.
If one server in the upstream is temporarily unavailable, do not simply delete its line; mark it with down instead.
upstream mynet {
    ip_hash;
    server www.wowpai1.top;
    server www.wowpai2.top;
    server www.wowpai3.top down;
}

Example: a basic load-balancing setup in nginx:
upstream my.net { # my.net is a custom name, referenced from the server block

    # The proxy balances load across the two machines 192.168.22.136 and 192.168.22.147.
    # Those machines can run apache or anything with richer page-serving features.

    # max_fails is the number of failed attempts after which the server is
    # considered unavailable for fail_timeout.
    # server servername:port - servername may be a host name or a dotted IP
    server 192.168.22.136:80 max_fails=1 fail_timeout=300s;
    server 192.168.22.147:80 max_fails=1 fail_timeout=300s;
}


server {
    listen 80;
    server_name localhost;
    location / {
        # the upstream block's name
        proxy_pass http://my.net;
        root html;
        index index.html index.htm;
    }
}

A high-quality e-book on nginx: http://tengine.taobao.org/book/

nodejs-2019-study

Official site: nodejs.org
Chinese site: nodejs.cn

1. Running JS code in the REPL

1. Open a terminal (cmd)
2. Run node
3. Exit with .exit or Ctrl+C

2. Writing a file

var fs = require('fs'); // require the file system module
fs.writeFile(__dirname + '/shi.txt', "床前明月光,疑似地上霜", 'utf8', function(err){
    if (err){
        console.log(err);
    }else{
        console.log('write succeeded');
    }
})

3. Reading a file

var fs = require('fs'); // require the file system module
fs.readFile(__dirname + '/test.txt', 'utf8', function(err, data){
    if(err){
        console.log("read failed", err);
    }else{
        console.log("read contents:", data);
    }
})
Note: if no character encoding is passed, you get a Buffer object back, displayed in hexadecimal.
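As a quick sketch of that note (reusing the same test.txt), reading without an encoding yields a Buffer that can be decoded by hand:

var fs = require('fs');
// no encoding argument, so data arrives as a Buffer
fs.readFile(__dirname + '/test.txt', function(err, data){
    if(err){
        return console.log("read failed", err);
    }
    console.log(data);                   // <Buffer ...> shown in hex
    console.log(data.toString('utf8'));  // decode explicitly
})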

4. File and directory paths

global:
__dirname

__filename

path.join() joins path segments:
var path = require('path');
path.join(__dirname, 'shibin.log')

5. Exception handling

try{
    console.log(a);
}catch(e){
    console.log('the code threw an error', e);
}
Note: errors from asynchronous file operations cannot be caught with try/catch.
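Because try/catch cannot intercept an error thrown inside an asynchronous callback, Node's convention is the error-first callback instead; a minimal sketch (the file name is made up):

var fs = require('fs');
// the error arrives as the callback's first argument instead of being thrown
fs.readFile(__dirname + '/no-such-file.txt', 'utf8', function(err, data){
    if(err){
        console.log('handled through the err parameter:', err.message);
    }else{
        console.log(data);
    }
})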

6. http_server

var http = require('http'); // require the http module
var server = http.createServer(); // create the http server

// listen on the given port
server.listen(8888, function(){
    console.log('listening; open http://127.0.0.1:8888');
});
server.on('request', function(request, response){
    // handler for the request event
    // response.write('hello-world');
    response.setHeader("content-type", "text/html;charset=utf-8"); // set a response header
    response.setHeader("name", "assasin"); // set a response header
    response.setHeader("age", "16"); // set a response header
    // response.writeHead() also sets response headers; it can set several at once, including the status code
    response.writeHead(200, "ok", {
        "content-type": "text/html;charset=utf-8",
        "name": "assasin",
        "age": 18
    })
    // send data to the browser: a string or a Buffer object
    response.write("hello-world");
    response.end('response finished'); // end the response
})

Note: response headers must be set before calling write().

7. response APIs

var http = require('http'); // require the http module
var server = http.createServer(); // create the http server
// listen on the given port
server.listen(8888, function(){
    console.log('listening; open http://localhost:8888 to test');
});
server.on('request', function(request, response){
    // response properties:
    // response.statusCode: the status code
    // response.statusMessage: the status message
    response.statusCode = 502;
    response.statusMessage = "Bad Gateway";

    response.end('response finished'); // end the response
})

8. content-type

After receiving the server's response, the browser decides how to handle the data based on the Content-Type response header:
Content-Type: text/html
Content-Type: text/css
Content-Type: application/json

9. A simple static file server

var http = require('http');
var server = http.createServer();
var path = require('path');
var fs = require('fs');
var mime = require('mime');

server.on('request', function(req, res){
    if(req.url == '/'){
        req.url = '/index.html'; // default to the index page
    }
    res.setHeader("content-type", mime.getType(req.url));
    fs.readFile(path.join(__dirname, 'view', req.url), function(err, data){
        if(err){
            res.statusCode = 404;
            res.statusMessage = "Not Found";
            res.end();
        }else{
            res.end(data);
        }
    })

})
server.listen(8888, function(){
    console.log("http://localhost:8888")
})

10. Common request properties

request.url: the path and parameters from the request
request.headers: request headers (object)
request.rawHeaders: request headers (array)
request.httpVersion: the HTTP protocol version
request.method: the HTTP request method
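A minimal sketch that logs these properties (the port is arbitrary):

var http = require('http');
http.createServer(function(request, response){
    console.log(request.url);         // e.g. /index?name=assasin
    console.log(request.method);      // e.g. GET
    console.log(request.httpVersion); // e.g. 1.1
    console.log(request.headers);     // the headers object
    response.end('ok');
}).listen(8888);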

11. Getting GET request parameters

http://localhost:8888?name=assasin&age=18
Extracted parameters: {"name":"assasin","age":18}
var http = require('http');
var server = http.createServer();
var querystring = require('querystring');
var url = require('url');
server.on('request', function(req, res){
    // 1. by hand
    // var params = {};
    // if(req.url.indexOf("?") != -1){
    //     var querystr = req.url.split('?')[1]; // name=assasin&age=18
    //     querystr.split('&').forEach(function(v, i){
    //         var temp = v.split('=');
    //         params[temp[0]] = temp[1];
    //     })
    // }
    // console.log(params);

    // 2. better: use querystring.parse()
    // if(req.url.indexOf("?") != -1){
    //     var str = req.url.split('?')[1];
    //     var params = querystring.parse(str);
    //     console.log(params);
    // }

    // 3. use the url module, passing true as the second argument
    var urlObj = url.parse(req.url, true);
    // urlObj.pathname: the path part
    // urlObj.query: the parameter part, parsed from a string into an object
    console.log(urlObj.query);
    res.end('ok');
})
server.listen(8888, function(){
    console.log("http://localhost:8888");
})

12. Getting POST request parameters

var http = require('http');
var server = http.createServer();
var querystring = require('querystring');
server.on('request', function(req, res){
    // POST parameters live in the request body
    // First, define an array to collect each chunk of received data
    var bufferlist = [];
    // 1. register a data event on req; it fires whenever POST data arrives at the server
    req.on('data', function(chunk){
        // POST data arrives at the server in several chunks; each one triggers a data
        // event and is received through chunk, which is a Buffer object
        // console.log(chunk);
        bufferlist.push(chunk);
    })
    // 2. register an end event on req, meaning the data has all arrived
    req.on('end', function(){
        // end fires once the POST data is complete, so the data is only available here;
        // merge all Buffer objects in the array into one with Buffer.concat()
        var result = Buffer.concat(bufferlist);

        // convert the result Buffer into a string
        // console.log(result.toString());
        // finally use querystring to turn the string into an object
        var params = querystring.parse(result.toString());
        console.log(params);
    })
    res.end('ok');
})

server.listen(8888, function(){
    console.log("http://localhost:8888");
})

13. npm overview

npm (node package manager) is node's package-management tool. It consists of:
the npm registry servers
the npm website, www.npmjs.com
the npm command-line tool
Basic commands (arguments in parentheses are optional):
npm init (-y): initialize a package.json file
npm install <package>: download a package
npm install <package>@<version>: download a specific version of a package
npm i <package>: shorthand for install

npm uninstall <package>: remove a package

npm install <package> -g: install globally
npm install live-server -g
npm install less -g

14. The purpose of package.json and its properties

package.json describes a package.
It must contain two fields: name and version.
name: the package name (no Chinese characters, spaces, uppercase letters, or special characters!);
version: the package's version.
{
    "name": "05-19",
    "version": "1.0.0",
    "description": "",
    "main": "exception.js",
    "scripts": {
        "test": "echo \"Error: no test specified\" && exit 1"
    },
    "keywords": [],
    "author": "",
    "license": "ISC",
    "dependencies": {
        "mime": "^2.4.3"
    }
}

15. nrm

nrm lets npm download from a mirror registry in China:
npm install nrm -g
List the available registries: nrm ls
Show the registry currently in use: nrm current
Switch registries: nrm use <registry>
Test a registry's speed: nrm test <registry>

16. The hacker_news example

var http = require('http');
var server = http.createServer();
var fs = require('fs');
var path = require('path');
var mime = require('mime');
var url = require('url');
server.on('request', function(req, res){
    var urlObj = url.parse(req.url, true);
    // 1. routing rules
    //    /index    home page
    //    /details  details page
    //    /submit   submission page
    if(req.url == '/' || req.url == "/index"){
        fs.readFile(path.join(__dirname, 'views', 'index.html'), function(err, data){
            res.end(data);
        })

        // res.end('home page');
    }else if(urlObj.pathname == "/details"){
        fs.readFile(path.join(__dirname, 'views', 'details.html'), function(err, data){
            res.end(data);
        })
        // res.end('details page');
    }else if(req.url == '/submit') {
        fs.readFile(path.join(__dirname, 'views', 'submit.html'), function(err, data){
            res.end(data);
        })

        // res.end('submission page');
    }else if(req.url.indexOf("/resource") == 0){
        res.setHeader("content-type", mime.getType(req.url));
        fs.readFile(path.join(__dirname, req.url), function(err, data){
            res.end(data);
        })
        // res.end("static resource");
    }else{
        res.statusCode = 404;
        res.statusMessage = "Not Found";
        res.end("Not Found");
    }
})

server.listen(8888, function(){
    console.log("http://localhost:8888");
})

17. Storing data in a file

var fs = require('fs');
var path = require('path');
fs.readFile(path.join(__dirname, 'data.json'), 'utf8', function(err, data){
    data = JSON.parse(data || '[]'); // if the file is empty, parse an empty array
    console.log(data);
})

// append a new record to data.json
var news = {
    "title": "123",
    "url": "",
    "text": "睡大觉"

}
fs.readFile(path.join(__dirname, 'data.json'), 'utf8', function(err, data){
    data = JSON.parse(data || '[]');

    // add the new record to the existing data;
    // the record needs an id property first
    news.id = data.length == 0 ? 1 : data[data.length - 1].id + 1;
    data.push(news);
    fs.writeFile(path.join(__dirname, 'data.json'), JSON.stringify(data), function(err){
        console.log("added");

    })
})

// look up a record by id:
// read everything from data.json
var id = 1
fs.readFile(path.join(__dirname, 'data.json'), 'utf8', function(err, data){
    data = JSON.parse(data || '[]');
    // data.forEach(function(v, i){
    //     if(v.id == id){
    //         console.log(v);
    //     }
    // })
    // find the first element in the array that satisfies the condition
    var result = data.find(function(v, i){
        return v.id == id;
    });
    console.log(result);
})

18. Common array methods

var arr = [1,2,3,4,5,6,7,8,9,10];
// iterate over the array
arr.forEach(function(v, i){
    console.log(v + "====>" + i);
})
// map
var newArr = arr.map(function(v, i){
    return v * 2;
})
console.log(newArr); // [ 2, 4, 6, 8, 10, 12, 14, 16, 18, 20 ]

// some: does any element satisfy the condition?
// check whether the array contains an even number
var flag = arr.some(function(v, i){
    return v % 2 == 0;
})
console.log(flag); // true

// every: do all elements satisfy the condition?
var flag = arr.every(function(v, i){
    return v % 2 !== 0;
})
console.log(flag); // false

// find the first element that satisfies the condition
var first = arr.find(function(v, i){
    return v % 2 == 0;
})
console.log(first); // 2

// filter: find all elements that satisfy the condition
var newdata = arr.filter(function(v, i){
    return v % 2 == 0;
})
console.log(newdata); // [ 2, 4, 6, 8, 10 ]

19. Using art-template

1. Install: npm install art-template
var template = require('art-template');
var path = require('path');
// render a file as the template
var obj = {
    msg: "hello-world"
}

var result = template(path.join(__dirname, 'tpl.html'), obj);
console.log(result);
-------------------------------------------------------------
// render a string variable as the template
var str = "<div>{{msg}}</div>";
// 1. compile the template string into a render function
var render = template.compile(str);
// 2. call the render function to render
var result = render(obj);
console.log(result);

20. hacker_news: rendering the data

var http = require('http');
var server = http.createServer();
var fs = require('fs');
var path = require('path');
var mime = require('mime');
var url = require('url');
var template = require('art-template');
server.on('request', function(req, res){
    var urlObj = url.parse(req.url, true);
    // 1. routing rules
    //    /index    home page
    //    /details  details page
    //    /submit   submission page

    if(req.url == '/' || req.url == "/index"){
        // render the data into the page
        fs.readFile(path.join(__dirname, 'views', 'index.html'), function(err, data){

            fs.readFile(path.join(__dirname, 'data.json'), 'utf8', function(err, newslist){
                newslist = JSON.parse(newslist || '[]');
                // data is the template, newslist is the data
                var render = template.compile(data.toString());
                var result = render({list: newslist});
                res.end(result);
            })
        })
        // res.end('home page');
    }else if(urlObj.pathname == "/details"){
        // read the template file
        fs.readFile(path.join(__dirname, 'views', 'details.html'), function(err, data){
            // get the id from the GET parameters and find the matching record
            fs.readFile(path.join(__dirname, 'data.json'), 'utf8', function(err, newslist){
                newslist = JSON.parse(newslist || '[]');
                var news = newslist.find(function(v, i){
                    return v.id == urlObj.query.id;
                })
                var render = template.compile(data.toString());
                var result = render({item: news});
                res.end(result);
            })
        })
        // res.end('details page');
    }else if(req.url == '/submit') {
        fs.readFile(path.join(__dirname, 'views', 'submit.html'), function(err, data){
            res.end(data);
        })
        // res.end('submission page');
    }else if(req.url.indexOf("/resource") == 0){
        res.setHeader("content-type", mime.getType(req.url));
        fs.readFile(path.join(__dirname, req.url), function(err, data){
            res.end(data);
        })
        // res.end("static resource");
    }else if(urlObj.pathname == '/add'){
        // the form data arrives as GET parameters in urlObj.query
        var news = urlObj.query;
        fs.readFile(path.join(__dirname, 'data.json'), 'utf8', function(err, newslist){
            newslist = JSON.parse(newslist || '[]');
            // assign an id
            news.id = newslist.length == 0 ? 1 : newslist[newslist.length - 1].id + 1;
            newslist.push(news);
            // store the result back into data.json
            fs.writeFile(path.join(__dirname, 'data.json'), JSON.stringify(newslist), function(){
                // res.end('ok');
                // redirect to the home page to show the result:
                // set the status code
                res.statusCode = 302;
                // set the status message
                res.statusMessage = "Found";
                // set the location response header
                res.setHeader('location', '/index');
                // end the response
                res.end();
            })
        })
    }else{
        res.statusCode = 404;
        res.statusMessage = "Not Found";
        res.end("Not Found");
    }
})

server.listen(8888, function(){
    console.log("http://localhost:8888");
})

21. Wrapping the data-reading logic in a function

var fs = require('fs');
var path = require('path');
var FILEPATH = path.join(__dirname, 'data.json');
function readfile(callback){
    fs.readFile(FILEPATH, 'utf8', function(err, data){
        data = JSON.parse(data || "[]");
        callback(data);
    })
}
// use it via the callback
readfile(function(newslist){
    console.log(newslist);
})

22. Wrapping the data-writing logic in a function

function writefile(data, callback){
    // read the existing data first
    readfile(function(newslist){
        // set the new record's id
        data.id = newslist.length == 0 ? 1 : newslist[newslist.length - 1].id + 1;
        // append the new record to the existing data
        newslist.push(data);
        fs.writeFile(FILEPATH, JSON.stringify(newslist), function(err){
            callback();
        })
    })
}
writefile({title: "",}, function(){
    console.log("ok");
})

23. Wrapping template rendering in a render function

function render_tpl(filename, data, res){
    fs.readFile(path.join(__dirname, 'views', filename + '.html'), function(err, tpl){
        var render = template.compile(tpl.toString());
        var result = render(data);
        res.end(result);
    })
}

24. Wrapping lookup of a record by id

function getdatabyid(id, callback){
    readfile(function(newslist){
        var news = newslist.find(function(v, i){
            return v.id == id;
        })
        callback(news);
    })
}

25. Modules

1. Defining a module
Each js file is a module, and every module has its own scope, so modules do not interfere with each other.
2. Importing a module
Use the require function to import a module.
3. Exporting from a module
module.exports starts out as an empty object; add properties to it to export things:
module.exports.name = whatever you want to export
exports.name = whatever you want to export

var imported = require('module path')
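A minimal sketch of the three steps, assuming two files in the same directory (the file names are made up):

// tools.js: define a module and export from it
function add(a, b){
    return a + b;
}
module.exports.add = add;

// main.js: import the module and use it
var tools = require('./tools');
console.log(tools.add(1, 2)); // 3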

26. Module standards

1. AMD: Asynchronous Module Definition (require.js); dependencies are declared up front
2. CMD: Common Module Definition (sea.js); dependencies are loaded lazily (on demand)
3. CommonJS: Node.js's module standard

27. Wrapping page redirection in a function

// attach this to the response object so that `this` is the response
function redirect(url){
    this.statusCode = 302;
    this.statusMessage = "Found";
    this.setHeader('location', url);
    this.end();
}

28. Introducing Express and the ways to register routes

Express: a web development framework built on node.js.
npm install express // download the package
var express = require('express'); // require the package
var app = express(); // create an application instance
1. app.METHOD: the request method must match the method name, and the request path must match the registered path.
app.get(routePath, function(req, res){

}),
app.post(routePath, function(req, res){

})
2. app.all: any request method works; the request path must match the registered route.
app.all(routePath, function(req, res){

})
3. app.use: any request method works; the request path only needs to start with the registered route path. If the route path is omitted, it defaults to "/".
app.use(routePath, function(req, res){

})

29. The req/res methods and properties Express adds

1. response
res.send(); // respond to the browser; accepts an object, array, string, number (used as a status code), or Buffer
res.download(path.join(__dirname, '1.jpg'), ''); // respond with a file as a download
res.status(404).end();
res.json({name: "assasin", age: 25});
res.jsonp({name: "assasin", age: 25}); // respond with JSONP; the request must pass a callback function
res.redirect("http://baidu.com"); // redirect
res.sendFile(path.join(__dirname, '1.jpg')); // send a file to the browser
2. request
req.body; // POST parameters; not usable without middleware
req.query; // GET parameters, returned as an object, directly usable
req.originalUrl; // the original url, similar to req.url
req.params; // route parameters, returned as an object
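A minimal sketch exercising a few of these (the route path is arbitrary):

var express = require('express');
var app = express();

app.get('/user/:id', function(req, res){
    console.log(req.query);  // GET parameters, e.g. ?name=assasin
    console.log(req.params); // route parameters, e.g. { id: '1' }
    res.json({ id: req.params.id }); // respond with JSON
});
app.listen(8888);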

30. Serving static assets with Express

app.use(express.static('public'));
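express.static can also be mounted under a URL prefix so that served files share a common path segment; a minimal sketch (the prefix and file name are made up):

app.use('/static', express.static('public'));
// public/logo.png is then served at http://localhost:8888/static/logo.png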

31. Express middleware

A middleware is really just a function taking req, res and next; calling next passes control to the next handler.

app.get('/index', function(req, res, next){
    // either end the response, or call the next handler
    next();
    // Express has a built-in final middleware as a fallback
})
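A minimal sketch of two middlewares chained with next() (a made-up logging example):

var express = require('express');
var app = express();

// first middleware: log the request, then hand control to the next one
app.use(function(req, res, next){
    console.log(req.method, req.url);
    next();
});
// second middleware: ends the response, so next() is not called
app.use(function(req, res){
    res.send('done');
});
app.listen(8888);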

32. Getting POST request parameters

POST parameters can be read from req.body, but only with the body-parser middleware installed:
npm install body-parser

var bodyParser = require('body-parser');
app.use(bodyParser.urlencoded());
app.post('/api', function(req, res){
    console.log(req.body);
})

33. Implementing the body-parser middleware yourself

var querystring = require('querystring');
app.use(function(req, res, next){
    // collect the POST parameters and attach them to req as the body property
    var bufferlist = [];
    req.on('data', function(chunk){
        bufferlist.push(chunk);
    })
    req.on('end', function(){
        var result = Buffer.concat(bufferlist).toString();
        // check which format the browser sent the data in
        // console.log(req.get('content-type'));
        if(req.get('content-type').indexOf('json') != -1){
            req.body = JSON.parse(result);
        }else if(req.get('content-type').indexOf('urlencoded') != -1){
            req.body = querystring.parse(result);
        }else{
            req.body = {};
        }
        // hand control over only after the body has been parsed
        next();
    })
})

app.post('/add', function(req, res){
    console.log(req.body);
})

34. Using a template engine in Express

// Template engine setup: app.engine(extension, require('express-art-template'))
// binds a template engine to files with the given extension.
// npm install art-template
// npm install express-art-template
// 1. define the engine used for HTML templates
app.engine('html', require('express-art-template'));
// 2. set the directory the templates live in; if unset, the default is ./views
// app.set('views', path to the template directory)
app.set('views', path.join(__dirname, 'views'));

// 3. set the default template extension, used when render is called without one
app.set('view engine', 'html');

35. Express-hacker-news

1. Packages
npm install express
npm install art-template
npm install express-art-template
npm install body-parser
2. app.js
var express = require('express');
var app = express();
var path = require('path');
var storage = require('./storage');

app.use(require('body-parser').urlencoded());
app.use(require('body-parser').json());
app.use('/resources', express.static('resources')); // static assets
// configure the template engine
app.engine('html', require('express-art-template'));
app.set('view engine', 'html');

app.get('/index', function(req, res, next){
    // read the data
    storage.getallnews(function(newslist){
        res.render('index', {list: newslist});
    })
    // res.sendFile(path.join(__dirname, 'views', 'index.html'));
})

app.get('/details', function(req, res, next){
    storage.getnewsbyid(req.query.id, function(news){
        res.render('details', {item: news});
    })
    // res.sendFile(path.join(__dirname, 'views', 'details.html'));
})

app.get('/submit', function(req, res, next){

    res.sendFile(path.join(__dirname, 'views', 'submit.html'));
})

app.get('/add', function(req, res, next){
    storage.addnews(req.query, function(){
        res.redirect('index');
    })
})
app.post('/add', function(req, res, next){
    storage.addnews(req.body, function(){
        res.redirect('index');
    })
})
app.listen(8888, function(){
    console.log("http://localhost:8888");
})
3. storage.js
var fs = require('fs');
var path = require('path');
var NEWSPATH = path.join(__dirname, 'data.json');
module.exports = {
    // read all records
    getallnews: function(callback){
        fs.readFile(NEWSPATH, 'utf8', function(err, data){
            data = JSON.parse(data || '[]');
            callback(data);
        })
    },
    // read a record by id
    getnewsbyid: function(id, callback){
        this.getallnews(function(newslist) {
            var news = newslist.find(function(v, i){
                return v.id == id;
            })
            callback(news);
        })
    },
    addnews: function(news, callback){
        this.getallnews(function(newslist){
            news.id = newslist.length == 0 ? 1 : newslist[newslist.length - 1].id + 1;
            newslist.push(news);
            fs.writeFile(NEWSPATH, JSON.stringify(newslist), function(err){
                callback();
            })
        })
    }

}

36. Mongodb

1. Start the mongodb service: mongod --dbpath <database directory>
   mongo --host 127.0.0.1 --port 27017
2. Connect to and operate mongodb: mongo
3. List the existing databases: show databases;
4. Switch databases: use <database>;
5. List all collections in the database: show collections;
6. Insert: db.<collection>.insert(document)
   db.<collection>.insertMany([document1, document2])
7. Delete: db.<collection>.deleteOne(filter)
   db.<collection>.deleteMany(filter)
8. Update: db.<collection>.updateOne(filter, update)
   db.<collection>.updateMany(filter, update)
   db.users.updateMany({name:'shibin'},{$set:{age:18}}) // set age to 18 where name=shibin
9. Query: db.<collection>.find() // all documents
   db.<collection>.find(filter) // filtered query
   filters: {age:{$gte:18}} // age greater than or equal to 18
            {age:{$lt:18}}  // less than
            {age:{$gt:18}}  // greater than
            {age:{$lte:18}} // less than or equal
            {age:{$gte:18}} // greater than or equal
            {age:{$ne:18}}  // not equal
            {age:{$in:[18,28]}}  // within the given set
            {age:{$nin:[18,28]}} // outside the given set

37. Operating MongoDB from Node.js

npm install mongodb
// 1. require the mongo client
var mongoClient = require('mongodb').MongoClient;
// 2. build the connection string
var connStr = "mongodb://localhost:27017";
// 3. connect via mongoClient's connect() method
mongoClient.connect(connStr, function(err, client){
    // console.log('ok');
    // client is the database client object;
    // perform CRUD operations through it
    var db = client.db('test');
    // operate on a collection through the db object
    var users = db.collection('users');
    // insert documents
    // db.collection('users').insert();
    users.insert({name: "史俊祥", age: 25}, function(err, dbresult){
        console.log(dbresult.result);
    })
    users.insertMany([{name: "www", age: 28}, {name: "qqq", age: 21}], function(err, dbresult){
        console.log(dbresult.result);
    })
    // query documents
    users.find({age: 18}).toArray(function(err, arr){
        console.log(arr);
    })
    users.find({age: {$gt: 18}}).toArray(function(err, arr){
        console.log(arr);
    })
    // delete documents
    users.deleteOne({age: 18}, function(err, dbreslt){
        console.log(dbreslt.result);
    })
    // update documents
    users.updateOne({age: 28}, {$set: {sex: 'female'}}, function(err, dbresult){
        console.log(dbresult.result);
    })

    // finally, close the database connection
    client.close();
})

38. Creating a storage.js that operates MongoDB

var mongoClient = require('mongodb').MongoClient;
var ObjectId = require('mongodb').ObjectId;
var connStr = "mongodb://localhost:27017";
var DBNAME = 'test';
var COLLECTION_NAME = 'news';
module.exports = {
    getallnews: function(callback){
        mongoClient.connect(connStr, function(err, client){
            var db = client.db(DBNAME);
            var news = db.collection(COLLECTION_NAME);

            news.find().toArray(function(err, arr){
                callback(arr);
                // close the database connection
                client.close();
            })
        })
    },
    getnewsbyid: function(id, callback){
        // mongodb's _id must be converted into an ObjectId:
        // pass in ObjectId('njbfjdjfskdj45s4df54dsf')
        mongoClient.connect(connStr, function(err, client){
            var db = client.db(DBNAME);
            var news = db.collection(COLLECTION_NAME);
            news.find({_id: ObjectId(id)}).toArray(function(err, arr){
                callback(arr[0]); // take the first element of the array
                // close the database connection
                client.close();
            })
        })
    },
    addnews: function(info, callback){
        mongoClient.connect(connStr, function(err, client){
            var db = client.db(DBNAME);
            var news = db.collection(COLLECTION_NAME);
            // insert the document
            news.insertOne(info, function(err, dbresult){
                if(dbresult.result.ok == 1){
                    callback();
                }
                // close the database connection
                client.close();
            })
        })
    },
}

39. hacker-news-api

A data API for a separated front end and back end
api.js

var express = require('express');
var storage = require('./storage');
var app = express();

// enable CORS
app.use(function(req, res, next){
    res.header("Access-Control-Allow-Origin", "*");
    res.header("Access-Control-Allow-Methods", "GET,PUT,POST,DELETE");
    res.header("Access-Control-Allow-Headers", "Content-Type");
    next();
})
app.get('/newslist', function(req, res, next){
    storage.getallnews(function(newslist){
        res.send({
            errCode: 200,
            msg: 'success',
            data: newslist
        });
    })
})

app.get('/details', function(req, res, next){
    storage.getnewsbyid(req.query.id, function(news){
        res.send({
            errCode: 200,
            msg: 'success',
            data: news
        });
    })
})
app.get('/addnews', function(req, res, next){
    storage.addnews(req.query, function(){
        res.send({
            errCode: 200,
            msg: "success",
            data: ''
        });
    })
})

app.listen(8888, function(){
    console.log("http://localhost:8888");
})

40. Publishing a package to npm

Register an account on the npm website.
1. npm init: initialize the package
2. create the package contents
3. nrm use npm: switch back to the official npm registry
4. npm login: log in to npm
5. npm publish: publish the package (the name must not clash with an existing npm package)
6. npm version major/minor/patch: bump the version number

41. Writing a command-line tool with npm

1. Create the package
2. Implement the command-line tool's functionality
3. Add to package.json:
"bin": {
    "mkassasindir": "./index.js"
    // command alias: path of the executable file
}
4. Add #!/usr/bin/env node at the very top of the executable file
5. Publish the package: npm publish
6. Install it globally: npm install <package> -g
7. Run your own command (a sketch of the executable follows)
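A minimal sketch of what the executable could look like (mkassasindir is the example alias from step 3; the directory logic is an assumed illustration, not the published package):

#!/usr/bin/env node
// index.js: create the directory named by the first command-line argument
var fs = require('fs');
var dirname = process.argv[2] || 'assasin';
fs.mkdir(dirname, function(err){
    if(err){
        console.log('failed to create directory:', err.message);
    }else{
        console.log('created directory: ' + dirname);
    }
});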

42. ES6 syntax

New features in ES6:
1. Declaring variables with let
let variableName (cannot be redeclared; has block scope)
2. Declaring constants with const: cannot be redeclared, also block-scoped, and the value cannot be reassigned.
3. Object shorthand
Property shorthand: when the property name and the variable holding its value have the same name.
Method shorthand:
let obj = {
    sing: function(){
        console.log('sing')
    }
}
can be written as
let obj = {
    sing () {
        console.log('sing')
    }
}
4. Object destructuring
let {propertyName: variableName} = object
let {name: name} = obj; // assign obj's name property to the variable name
5. import {item1, item2} from "module"
6. Array destructuring
let arr = [1,2,3,4];
let [num1, num2, num3, num4] = arr;

let arr = [1,2,3,4,5]; // there can be only one rest element, and it must come last
let [num1, ...num2] = arr; // num2 is [ 2, 3, 4, 5 ]
7. Arrow functions
let func2 = (parameterList) => {
    console.log(2222); // function body
}
func2();
### Shorthand forms ###
If there is exactly one parameter, the parentheses may be omitted:
let f1 = (a) => {
    console.log(a * 2);
}
let f2 = a => {
    console.log(a * 2);
}
f2(10);
If the function body is a single statement, the braces may be omitted:
let f1 = (a) => {
    console.log(a * 2);
}
let f3 = a => console.log(a / 10);
If the body is a single return statement, both return and the braces may be omitted:
let sum = (a, b) => {
    return a + b;
}

let sum = (a, b) => a + b;
8. How this works
The four calling patterns for this:
Function call: this ---> window; functionName()
Method call: this ---> the object owning the method; obj.methodName()
Constructor call: this ---> the newly created instance; new FunctionName()
Context call: this ---> the first argument of call/apply; functionName.call()

Arrow functions have no this of their own; using this inside one looks it up in the enclosing scope.
Every scenario that used var that = this can be handled with an arrow function instead:
// let obj = {
//     name: "纯生",
//     say (){
//         var that = this;
//         setTimeout(function(){
//             console.log("My name is " + that.name);
//         }, 1000)
//     }
// }
let obj = {
    name: "纯生",
    say (){
        setTimeout(() => {
            console.log("My name is " + this.name);
        }, 1000)
    }
}
obj.say();
9. Arrow functions also have no arguments object:
When an ordinary function is called, all actual arguments are stored in the arguments object.
Use case: when the number of arguments is unknown, use arguments to get them all.
function sum(){
    var result = 0;
    for (let i = 0; i < arguments.length; i++){
        result += arguments[i];
    }
    return result;
}
res = sum(1, 2, 3, 5);
console.log(res);
10. Rest parameters: there can be only one, and it must be the last parameter.
function func(b, ...a){
    console.log(a);
}
func(1, 2, 3, 4);
An arrow function can take a variable number of arguments through a rest parameter.
11. Default parameter values
12. Using a variable's value as a property name:
var key = "name";
var obj = {
    [key]: "assasin",
}
console.log(obj);
13. The object spread operator
var obj = {
    name: "assasin",
    age: 25
}

var ass = {
    ...obj
}
console.log(ass);
14. The spread operator when passing an array as arguments
function sum(a, b, c){
    return a + b + c;
}
var arr = [2, 3, 5];
console.log(sum(...arr));
15. Using class
// function Person(name, age){
//     this.name = name;
//     this.age = age;
// }

class Person {

    // static method (ES6 does not support static properties)
    static sayhi(){
        console.log("the sayhi method");
    }
    // constructor
    constructor(name, age){
        this.name = name;
        this.age = age;
    }
    // instance method
    say(){
        console.log("the say method");
    }
}
// inheritance
class Student extends Person {

}
var stu = new Student();
console.log(stu);
console.log(stu.say());
Person.sayhi();
var p = new Person('assasin', 28);
console.log(p);

16. Why super must be called:
function Person(){
    this.name = "assasin";
    this.age = 25;
}
function Student(){
    Person.call(this);
}
var stu = new Student();
console.log(stu);


class Person {
    constructor(){
        this.name = "assasin";
        this.age = 25;
    }
}

class Student extends Person {
    constructor(){
        super();
    }
}

43. Promise

Promises solve the callback-hell problem. Anything that supports promises lets you attach callbacks with .then():
.then(onSuccess, onFailure)
A promise object has three states:
① pending: in progress
② fulfilled: succeeded
③ rejected: failed
When chaining .then() calls, return a new Promise object from the callback.

// the Promise API
function timeout(time){
    return new Promise(function(resolve, reject){
        // the work to perform
        setTimeout(function(){
            // console.log("1s");
            // when the async work finishes, change the promise's state:
            // calling resolve marks it fulfilled
            // calling reject marks it rejected
            resolve(123);
            // reject();
        }, time)
    });
}
// usage
timeout(1000).then(function(data){
    // what to do on success
    console.log("1s", data);
}, function(err){
    console.log(err);
})
Wrapping ajax in a promise:
function ajax(option){
    return new Promise((resolve, reject) => {
        // send the ajax request
        let xhr = new XMLHttpRequest();
        xhr.open(option.type, option.url);
        xhr.send(null);
        xhr.onreadystatechange = function(){
            if(xhr.readyState == 4){
                if(xhr.status == 200){
                    resolve(xhr.responseText);
                }else{
                    reject();
                }
            }
        }
    });

}
// usage
ajax({
    url: "",
    data: {},
    type: "get"
}).then(function(data){
    console.log(data);
})
Notes on the static Promise methods all and race:
all: run a task after every promise in a set has completed
race: run a task as soon as the first promise in a set completes
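A minimal sketch of both, reusing the timeout helper from above (the delays are arbitrary):

// all: resolves once every promise resolves, with an array of results
Promise.all([timeout(500), timeout(1000)]).then(function(results){
    console.log(results); // [123, 123] after about 1s
});
// race: settles as soon as the first promise settles
Promise.race([timeout(500), timeout(1000)]).then(function(data){
    console.log(data); // 123 after about 0.5s
});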

Node.js load testing (tuning the max connections of mongodb, redis and mysql)

[TOC]

## Simple tests

siege: https://blog.csdn.net/lshemail/article/details/79298357

siege.js is a programmer-friendly testing tool, mainly for testing HTTP interfaces (or pages); the test code is written by hand and there is no visual client. Suitable for basic testing.

websocket-bench: https://github.com/M6Web/websocket-bench

A nodejs tool to benchmark socket.io and faye websocket servers.

websocket-bench is a nodejs-based tool mainly for testing websockets; it stopped being updated three years ago.

JMeter: https://jmeter.apache.org

A 100% pure Java desktop application, open-sourced by the Apache foundation, designed for testing client/server software (for example web applications). It can test the performance of static and dynamic resources: static files, Java Servlets, CGI scripts, Java objects, databases, FTP servers and so on. JMeter can simulate heavy load against a server, network or object to test robustness, or to analyze overall performance under different kinds of load.

JMeter requires a Java environment.

JMeter (part 1): a brief introduction to the tool

Changing the system's max open files and max threads (CentOS 7.2)

A process limit that is too small can leave services stuck, and threads added with ulimit -u are not permanent; to make the change permanent you must edit the configuration files.
1. Switch to root.
2. vim /etc/security/limits.conf

# End of file
#root soft nofile 65535
#root hard nofile 65535
#* soft nofile 65535
#* hard nofile 65535
* soft nproc 127093
* hard nproc 127093
* soft nofile 127093
* hard nofile 127093

Comment out the existing entries and add the new ones (note the item is spelled nproc, not noproc).
3. vim /etc/security/limits.d/20-nproc.conf

#*         soft    nproc     4096
#root      soft    nproc     unlimited
*          soft    nproc     127098
*          hard    nproc     204800

Again comment out the existing entries, add the new ones, and reboot; both root and normal users will then have a 127098 (your own value) thread limit.
The max open-file count works the same way; nofile is the max number of open files.

Source: BiuBiuBiu___, CSDN: https://blog.csdn.net/BiuBiuBiu___/article/details/80169358

Adjusting mongodb's max connections (3.6)

In newer mongodb versions the default max connection count is 65536 (maxIncomingConnections).

Log in to mongodb from the console and check it with:

> db.serverStatus().connections;
{ "current" : 1, "available" : 203, "totalCreated" : 1 }
max connections = the current value + the available value

Redis max connections

Redis's default max connection count is 10000; here it is raised to 30000.

// check redis's max connections (log in to the redis console first)
> config get maxclients
1) "maxclients"
2) "10000"

// check redis's current connection count
> info Clients
# Clients
connected_clients:16
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

// change redis's max connections
Temporarily:
config set maxclients 10208

// permanently:
vim /etc/redis.conf
set maxclients 30000
then save the configuration file and restart redis

A problem encountered

Changing ulimit -n and the config file alone does not work; the configuration under redis.service.d in the systemd directory must be changed too:

cd /etc/systemd/system/redis.service.d
edit the LimitNOFILE value in limit.conf and save
then run:
# systemctl daemon-reload
// restart redis
# systemctl restart redis

Alternatively, point redis at the config file manually at startup, or pass the max connection count directly:

redis-server /etc/redis.conf
or: redis-server --maxclients 20000

MySQL max connections

Check MariaDB's max connections (the default is 151):

MariaDB [(none)]> show variables like 'max_connections';
+-----------------+-------+
| Variable_name | Value |
+-----------------+-------+
| max_connections | 151 |
+-----------------+-------+
1 row in set (0.00 sec)

Edit /etc/my.cnf,

and add a line under [mysqld]:

max_connections=10000

After restarting the mariadb service, the max connection count is checked again; it is not the 10000 we set:

MariaDB [(none)]> show variables like 'max_connections';
+-----------------+-------+
| Variable_name | Value |
+-----------------+-------+
| max_connections | 214 |
+-----------------+-------+
1 row in set (0.00 sec)

This is because mariadb has a default open-files limit. It can be raised by configuring /usr/lib/systemd/system/mariadb.service.

Edit /usr/lib/systemd/system/mariadb.service

and add two lines under [Service]:

LimitNOFILE=20000
LimitNPROC=20000

Reload systemd and restart mariadb:

# systemctl daemon-reload
# systemctl restart mariadb

### Possible impact

By default Linux gives a process only 1024 file descriptors; raise the limit with ulimit.

https://cnodejs.org/topic/5445caed9657d9ab12567e88

Building a document preview system with Tornado

1. Preparation

First, download the front-end PDF preview framework PDF.js, a browser-side PDF parsing and preview framework: http://mozilla.github.io/pdf.js/
Next, the project uses showdown.js, a JS framework for rendering Markdown documents.
The back end is Python with tornado as the web framework; version 5.1.1 is used here.

2. Project code

Download the PDF.js project code and create a files folder under /pdfjs/web to hold uploaded files. To let PDF.js preview PDF files, switch into the pdfjs folder and start a file server:

python -m http.server 8081

or

python -m SimpleHTTPServer 8081

Next, the HTML files. index.html is the home page; it implements file upload:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>File upload</title>
</head>
<body>

<div align="center">
    <br><br>
    <h1>File upload</h1>
    <form action='file' enctype="multipart/form-data" method='post'>
        <div class="am-form-group am-form-file">
            <input id="doc-form-file" type="file" name="file" multiple>
        </div>
        <div id="file-list"></div>
        <p>
            <button type="submit" class="am-btn am-btn-default">Submit</button>
        </p>
    </form>
</div>

</body>
</html>

markdown.html displays the contents of a Markdown file:

<!DOCTYPE html>
<html>

<head>
    <meta charset="UTF-8">
    <title>Markdown preview</title>
    <script src="https://cdn.bootcss.com/showdown/1.9.0/showdown.min.js"></script>
    <script>
        function convert(){
            var converter = new showdown.Converter();
            var text = "{{ md_content }}";
            var html = converter.makeHtml(text.replace(/newline/g, "\n"));
            document.getElementById("result").innerHTML = html;
        }
    </script>
</head>

<body onload="convert()">
    <div id="result" ></div>

</body>
</html>

Note that the head references showdown.js via its CDN address, so that project file does not need to be downloaded.
Finally, the back end, implemented with Python's Tornado module. tornado_file_receiver.py handles uploading and saving documents and displays their contents; the full code:

# -*- coding: utf-8 -*-
import os
import logging
import traceback
import tornado.ioloop
import tornado.web
from tornado import options

from parse_file import *


# document upload and parsing
class UploadFileHandler(tornado.web.RequestHandler):
    # the get handler
    def get(self):
        self.render('upload.html')

    def post(self):
        # where uploaded files are stored
        upload_path = os.path.join(os.path.dirname(__file__), 'pdfjs/web/files')
        # extract the metadata of the form file field named 'file'
        # (only single-document upload is supported for now)
        file_meta = self.request.files['file'][0]
        filename = file_meta['filename']
        # save the file
        with open(os.path.join(upload_path, filename), 'wb') as up:
            up.write(file_meta['body'])

        text = file_meta["body"]

        # parse the file contents
        mtype = file_meta["content_type"]
        logging.info('POST "%s" "%s" %d bytes', filename, mtype, len(text))
        if mtype in ["text/x-python", "text/x-python-script"]:
            self.write(parse_python(str(text, encoding="utf-8")))
        elif mtype in ["text/plain", "text/csv"]:
            self.write(parse_text_plain(str(text, encoding="utf-8")))
        elif mtype == "text/html":
            self.write(str(text, encoding="utf-8"))
        elif mtype.startswith("image"):
            self.write(parse_image(mtype, text))
        elif mtype == "application/json":
            self.write(parse_application_json(str(text, encoding="utf-8")))
        elif mtype == "application/pdf":
            self.redirect("http://127.0.0.1:8081/web/viewer.html?file=files/%s" % filename)
        elif mtype == "application/octet-stream" and filename.endswith(".md"):
            self.render("markdown.html", md_content=r"%s" % str(text, encoding="utf-8").replace("\n", "newline"))
        else:  # other file formats
            try:
                self.write(str(text, encoding="utf-8").replace("\n", "<br>"))
            except Exception:
                logging.error(traceback.format_exc())
                self.write('<font color=red>Unsupported file format!</font>')


def make_app():
    return tornado.web.Application([(r"/file", UploadFileHandler)],
                                   template_path=os.path.join(os.path.dirname(__file__), "templates"))  # template path


if __name__ == "__main__":
    # Tornado configures logging.
    options.parse_command_line()
    app = make_app()
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()

parse_file.py parses the various document formats and returns HTML for display; the full code:

# -*- coding: utf-8 -*-
# author: Jclian91
# place: Pudong Shanghai
# time: 2020/6/5 1:05 PM
# filename: parse_file.py
# parses the data of the various file types
import json
import base64
import logging
import traceback
from json import JSONDecodeError


# parse the text/plain or text/csv formats
def parse_text_plain(text):
    return "<html><head></head><body>%s</body></html>" % text.replace("\n", "<br>")


# parse the application/json format
def parse_application_json(text):
    try:
        data_dict = json.loads(text)
        return json.dumps(data_dict, ensure_ascii=False, indent=2).replace("\n", "<br>").replace(" ", "&nbsp;")
    except JSONDecodeError:
        try:
            data_list = [json.loads(_) for _ in text.split("\n") if _]
            return json.dumps(data_list, ensure_ascii=False, indent=2).replace("\n", "<br>").replace(" ", "&nbsp;")
        except JSONDecodeError:
            logging.error(traceback.format_exc())
            return "Failed to parse the JSON file"
    except Exception as err:
        logging.error(traceback.format_exc())
        return "Unknown error: %s" % err


# parse the image/* formats
def parse_image(mtype, text):
    return '<html><head></head><body><img src="data:%s;base64,%s"></body></html>' % \
           (mtype, str(base64.b64encode(text), "utf-8"))


# parse Python files
def parse_python(text):
    # indentation and line breaks
    text = text.replace("\n", "<br>").replace(" ", "&nbsp;").replace("\t", "&nbsp;" * 4)

    # color the keywords
    color_list = ["gray", "red", "green", "blue", "orange", "purple", "pink", "brown", "wheat", "seagreen", "orchid", "olive"]
    key_words = ["self", "from", "import", "def", ":", "return", "open", "class", "try", "except", '"', "print"]
    for word, color in zip(key_words, color_list):
        text = text.replace(word, '<font color=%s>%s</font>' % (color, word))

    colors = ["peru"] * 7
    punctuations = list("[](){}#")
    for punctuation, color in zip(punctuations, colors):
        text = text.replace(punctuation, '<font color=%s>%s</font>' % (color, punctuation))

    html = "<html><head></head><body>%s</body></html>" % text

    return html

3. How each format is handled

Below, the preview mechanism for each format is described in more detail.

text/html: html files, etc.

html files have the MIMETYPE text/html. Since the project displays everything as HTML, text/html documents are returned as-is.
In the Tornado code above, the filename variable is the document name and text is the document content, a bytes string. For display we simply return the document content:

self.write(str(text, encoding="utf-8"))

Here str(text, encoding="utf-8") converts the bytes string into a UTF-8 string.

text/plain: txt and log files, etc.

Files such as txt/log have the MIMETYPE text/plain. Unlike HTML documents, displaying them requires wrapping the returned text in HTML, as below (from parse_file.py):

# parse the text/plain or text/csv formats
def parse_text_plain(text):
    return "<html><head></head><body>%s</body></html>" % text.replace("\n", "<br>")

text/csv: csv files

csv files have the MIMETYPE text/csv and are previewed the same way as txt/log documents.
But csv is a comma-separated, tabular format, so the front end could display it better. For a nicer preview of this format, see the article 利用tornado实现表格文件预览.

application/json: json files

For json previews the interesting part is reading the file. Two cases are handled: the whole file is one json string, or each line of the file is a json string. For display, json.dumps's indent parameter produces the indentation, which is converted into HTML spaces (from parse_file.py):

# parse the application/json format
def parse_application_json(text):
    try:
        data_dict = json.loads(text)
        return json.dumps(data_dict, ensure_ascii=False, indent=2).replace("\n", "<br>").replace(" ", "&nbsp;")
    except JSONDecodeError:
        try:
            data_list = [json.loads(_) for _ in text.split("\n") if _]
            return json.dumps(data_list, ensure_ascii=False, indent=2).replace("\n", "<br>").replace(" ", "&nbsp;")
        except JSONDecodeError:
            logging.error(traceback.format_exc())
            return "Failed to parse the JSON file"
    except Exception as err:
        logging.error(traceback.format_exc())
        return "Unknown error: %s" % err

There is surely a nicer front-end display for json files; no dedicated json JS framework is used here, which is left as a future improvement.

application/pdf: pdf files

Displaying PDF documents is a bit more involved; the project relies on PDF.js to run a PDF preview service, as described at the start of the project-code section.
With the preview service running, uploaded files all land in pdfjs/web/files, so the preview URL for a PDF is http://127.0.0.1:8081/web/viewer.html?file=files/pdf_name, where pdf_name is the uploaded PDF's file name.
With that service in place, the code to display a PDF is simple (from tornado_file_receiver.py):

elif mtype == "application/pdf":
    self.redirect("http://127.0.0.1:8081/web/viewer.html?file=files/%s" % filename)

text/x-python: Python script files

Handling Python scripts is not complicated: when converting the Python document to HTML, indentation and line breaks are added and selected Python keywords are colored (from parse_file.py):

# parse Python files
def parse_python(text):
    # indentation and line breaks
    text = text.replace("\n", "<br>").replace(" ", "&nbsp;").replace("\t", "&nbsp;" * 4)

    # color the keywords
    color_list = ["gray", "red", "green", "blue", "orange", "purple", "pink", "brown", "wheat", "seagreen", "orchid", "olive"]
    key_words = ["self", "from", "import", "def", ":", "return", "open", "class", "try", "except", '"', "print"]
    for word, color in zip(key_words, color_list):
        text = text.replace(word, '<font color=%s>%s</font>' % (color, word))

    colors = ["peru"] * 7
    punctuations = list("[](){}#")
    for punctuation, color in zip(punctuations, colors):
        text = text.replace(punctuation, '<font color=%s>%s</font>' % (color, punctuation))

    html = "<html><head></head><body>%s</body></html>" % text

    return html

As far as the author knows, there are better ways to preview Python script contents, e.g. with the handout module; that may be added later.

image/*: image files such as jpg, png, etc.

There are many ways to show an image in HTML; the approach used here is:

<img src="data:image/png;base64,ABKAMNDKSJFHVCJSNVOIEJHVUEHVUV==">

i.e. base64-encode the raw bytes read from the image, implemented as follows (from parse_file.py):

import base64
# parse the image/* formats
def parse_image(mtype, text):
    return '<html><head></head><body><img src="data:%s;base64,%s"></body></html>' % \
           (mtype, str(base64.b64encode(text), "utf-8"))

markdown files

Previewing markdown files is slightly tricky. It relies on showdown.js plus some trial and error: the hard part is that the newline characters \n read from the markdown must not be escaped when converted into a JavaScript string. The approach here is to replace every \n in the markdown read by Python with the token newline, and only substitute \n back in when the JS renders, which sidesteps the escaping problem (see markdown.html for the details). First the Python back end replaces the markdown's \n with newline:

elif mtype == "application/octet-stream" and filename.endswith(".md"):
    self.render("markdown.html", md_content=r"%s" % str(text, encoding="utf-8").replace("\n", "newline"))

Then the JS in markdown.html converts the newline tokens back into \n:

<script>
    function convert(){
        var converter = new showdown.Converter();
        var text = "{{ md_content }}";
        var html = converter.makeHtml(text.replace(/newline/g, "\n"));
        document.getElementById("result").innerHTML = html;
    }
</script>