I. Object-oriented parts of the CNN sentiment-classification code
sparse.py
super(Embedding, self).__init__()
This calls the parent class's initializer, i.e. it runs the parent's __init__(); without this call you would have to write the initialization yourself.
self.weight = Parameter(torch.Tensor(num_embeddings, embedding_dim))
Jumping to the definition of Parameter:
class Parameter(Variable):
    """A kind of Variable that is to be considered a module parameter.

    Parameters are :class:`~torch.autograd.Variable` subclasses, that have a
    very special property when used with :class:`Module` s - when they're
    assigned as Module attributes they are automatically added to the list of
    its parameters, and will appear e.g. in :meth:`~Module.parameters` iterator.
    Assigning a Variable doesn't have such effect. This is because one might
    want to cache some temporary state, like last hidden state of the RNN, in
    the model. If there was no such class as :class:`Parameter`, these
    temporaries would get registered too.

    Another difference is that parameters can't be volatile and that they
    require gradient by default.

    Arguments:
        data (Tensor): parameter tensor.
        requires_grad (bool, optional): if the parameter requires gradient. See
            :ref:`excluding-subgraphs` for more details.
    """
    def __new__(cls, data=None, requires_grad=True):
        return super(Parameter, cls).__new__(cls, data, requires_grad=requires_grad)

    def __repr__(self):
        return 'Parameter containing:' + self.data.__repr__()
In the Parameter class there is no self.data assignment, so the data attribute comes from the parent class (Variable). An attribute only belongs to the subclass instance once it is assigned, e.g. self.data = ... inside __init__(); when a method merely reads self.data, the lookup falls through to what the parent class provides.
The __repr__ method at the end is an example of a subclass wrapping the parent's functionality: it reuses self.data.__repr__() and simply prepends a label.
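A minimal sketch of what the docstring describes, written against the current torch.nn API (the Toy class below is only an illustration, not PyTorch code):

import torch
import torch.nn as nn

class Toy(nn.Module):
    def __init__(self):
        super(Toy, self).__init__()
        # a Parameter assigned as a module attribute is registered automatically
        self.weight = nn.Parameter(torch.zeros(3, 4))
        # a plain tensor attribute is not treated as a parameter
        self.cache = torch.zeros(3, 4)

m = Toy()
print([name for name, _ in m.named_parameters()])  # ['weight']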
Distinguishing __init__, __new__ and __call__:
class O(object):
    def __init__(self, *args, **kwargs):
        print("init")
        super(O, self).__init__(*args, **kwargs)

    def __new__(cls, *args, **kwargs):
        print("new", cls)
        return super(O, cls).__new__(cls, *args, **kwargs)

    def __call__(self, *args, **kwargs):
        print("call")

oo = O()
print("________")
oo()
The output is:
new <class '__main__.O'>
init
________
call
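This is also why a PyTorch model can be invoked as model(input): nn.Module defines __call__, which (after handling hooks) calls forward. A rough sketch of that idea, using made-up MiniModule and Square classes:

class MiniModule:
    def __call__(self, *args, **kwargs):
        # the real nn.Module.__call__ also runs hooks; only the core idea is kept here
        return self.forward(*args, **kwargs)

class Square(MiniModule):
    def forward(self, x):
        return x * x

print(Square()(3))  # __call__ delegates to forward, so this prints 9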
conv.py
class Conv2d(_ConvNd):
    r"""Applies a 2D convolution over an input signal composed of several input
    planes.
    """

    def __init__(self, in_channels, out_channels, kernel_size, stride=1,
                 padding=0, dilation=1, groups=1, bias=True):
        kernel_size = _pair(kernel_size)
        stride = _pair(stride)
        padding = _pair(padding)
        dilation = _pair(dilation)
        super(Conv2d, self).__init__(
            in_channels, out_channels, kernel_size, stride, padding, dilation,
            False, _pair(0), groups, bias)

    def forward(self, input):
        return F.conv2d(input, self.weight, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
_pair() jumps to utils.py:
def _ntuple(n):
    def parse(x):
        if isinstance(x, collections.Iterable):
            return x
        return tuple(repeat(x, n))
    return parse

_single = _ntuple(1)
_pair = _ntuple(2)
_triple = _ntuple(3)
_quadruple = _ntuple(4)
This is a functional-programming idiom built on nested functions (a closure). For example:
def two_dim(y):
    def one_dim(x):
        return x * x + y
    return one_dim

one_dim_plus_one = two_dim(1)

print(one_dim_plus_one)     # the function object: <function two_dim.<locals>.one_dim at 0x0000012F6DBFCB70>
print(one_dim_plus_one(2))  # 5
print(one_dim_plus_one(3))  # 10


# f = x*x + y
# g1 = f(x, 1)
f = lambda x, y: x * x + y
g1 = lambda x: f(x, 1)      # computes x*x + 1

print(g1)                   # the lambda object that wraps f(x, 1)
print(g1(3))                # 10
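The standard library's functools.partial does the same "fix one argument" trick without writing a lambda by hand; a small sketch reusing the two-argument f from above:

from functools import partial

def f(x, y):
    return x * x + y

g1 = partial(f, y=1)   # fixes y = 1, like the lambda above
print(g1(3))           # 10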
1. Jumping to repeat(x, n) only shows an __init__() containing pass. That is a stub file generated by the IDE: repeat comes from itertools, a standard-library module implemented in C, so there is no Python source to jump into.
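For reference, a quick check (my own snippet) of what repeat produces, which is all _ntuple needs from it:

from itertools import repeat

print(list(repeat(2, 4)))   # [2, 2, 2, 2]
print(tuple(repeat(2, 4)))  # (2, 2, 2, 2), exactly what tuple(repeat(x, n)) returns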
2. tuple([iterable])
What counts as an iterable? Lists, strings, and so on.
if isinstance(x, collections.Iterable):
    return x
return tuple(repeat(x, n))
x = 'hello'
print(_quadruple(x))  # 'hello' is iterable, so it is returned unchanged: 'hello'

x = 2                 # 2 is not iterable, so the output is (2, 2, 2, 2)
print(_quadruple(x))
3. isinstance() is a built-in function (built-ins can be used without importing anything; len() is another built-in).
Check the isinstance function with help() at the interactive prompt:
isinstance(obj, class_or_tuple, /)
The full table of built-in functions can be found in the Python documentation.
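If the table is not at hand, the builtins module can be inspected directly (a small sketch):

import builtins

print('isinstance' in dir(builtins))  # True
print('len' in dir(builtins))         # True, len() is a built-in as well
print(len(dir(builtins)))             # around 150 names, varies by Python version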
Names of the form __xxx__() are special ("dunder") methods from the Python data model, which is yet another category. Take care to distinguish these different kinds of functions.
a = Notebook()                # Notebook is a user-defined class (from the notebook example)
isinstance(a, Notebook)       # True

class Test(Notebook): pass
issubclass(Test, Notebook)    # True
4. F.conv2d
def forward(self, input):
    return F.conv2d(input, self.weight, self.bias, self.stride,
                    self.padding, self.dilation, self.groups)
conv2d jumps to the conv2d defined in functional.py (imported via from .. import functional as F):
functional.py
# Convolutions
ConvNd = torch._C._functions.ConvNd

def conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1,
           groups=1):
    """Applies a 2D convolution over an input image composed of several input
    planes.

    See :class:`~torch.nn.Conv2d` for details and output shape.
    """
    f = ConvNd(_pair(stride), _pair(padding), _pair(dilation), False,
               _pair(0), groups, torch.backends.cudnn.benchmark,
               torch.backends.cudnn.enabled)
    return f(input, weight, bias)
So Conv2d is a wrapper around the functional conv2d: the module owns the weight and bias, and forward() just passes them to the function.
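A quick sketch (written against the current PyTorch API) showing that the module and the functional call compute the same thing:

import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
x = torch.randn(1, 3, 32, 32)

y_module = conv(x)  # goes through Conv2d.forward
y_functional = F.conv2d(x, conv.weight, conv.bias, conv.stride,
                        conv.padding, conv.dilation, conv.groups)
print(torch.allclose(y_module, y_functional))  # True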
# AttributeError: 'Child' object has no attribute 'data'
class Parent:
    def __init__(self):
        self.data = 12

class Child(Parent):
    def __init__(self):
        pass
        # super().__init__() is not called, so Parent never sets self.data

a = Child()
print(a.data)


# 12
class Parent:
    def __init__(self):
        self.data = 12

class Child(Parent):
    def __init__(self):
        super().__init__()

a = Child()
print(a.data)


# 12
class Parent:
    def __init__(self):
        self.data = 12

class Child(Parent):
    def __init__(self):
        self.data = 25
        super().__init__()    # runs afterwards and overwrites self.data with 12

a = Child()
print(a.data)


# 25
class Parent:
    def __init__(self):
        self.data = 12

class Child(Parent):
    def __init__(self):
        super().__init__()
        self.data = 25        # overwrites the 12 that Parent.__init__ just set

a = Child()
print(a.data)
II. Object-oriented programming in Python 3
1. List comprehensions
def search(self, filter):
    return [note for note in self.notes if note.match(filter)]
is equivalent to
def search(self, filter):
    temp = []
    for note in self.notes:
        if note.match(filter):
            temp.append(note)
    return temp
A list comprehension builds a new (temporary) list in a single expression. In general:
# General shape: f = [x for ... if ... for ... if ...]
# which expands to nested for/if statements appending x to a fresh list:
#
#     temp_list = []
#     for ...:
#         if ...:
#             for ...:
#                 if ...:
#                     temp_list.append(x)
#     f = temp_list
#
# (or append directly to f after initialising f = [])

# Concrete example: [x * y for x in range(1, 10) for y in range(1, 10)]
a = []
for x in range(1, 10):
    for y in range(1, 10):
        a.append(x * y)
2. global
var = 13

def test():
    global var      # refer to (and rebind) the module-level var
    var = 12
    print(var)

# print(var)
var = 25
test()              # prints 12; the module-level var is now 12
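A small contrast (counter, bump_wrong and bump_right are made-up names) showing why global is needed when a function rebinds a module-level name:

counter = 0

def bump_wrong():
    counter += 1    # UnboundLocalError: the assignment makes counter a local name

def bump_right():
    global counter
    counter += 1    # rebinds the module-level counter

bump_right()
print(counter)      # 1
# bump_wrong()      # uncommenting this call raises UnboundLocalError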
(The notes from the book are not finished yet.)
Original post: http://www.cnblogs.com/Joyce-song94/p/7119287.html