SEMI-SUPERVISED CLASSIFICATION WITH GRAPH CONVOLUTIONAL NETWORKS

    This paper proposes a convolutional neural network that operates directly on graphs.

    INTRODUCTION

    The authors note that earlier semi-supervised approaches smooth label information over the graph by adding a graph Laplacian regularization term to the loss function:

    L = L0 + λ·L_reg,   L_reg = Σ_{i,j} A_ij ||f(X_i) − f(X_j)||² = f(X)^T ∆ f(X)

    Here L0 denotes the supervised loss over the labeled part of the graph, f(·) is a differentiable neural-network-like function, λ is a weighting factor, and X is the matrix whose rows are the node feature vectors X_i. The unnormalized graph Laplacian is ∆ = D − A.

    Laplacian matrix

    L = D − A

    where D is the degree matrix and A is the adjacency matrix.


    Entrywise:

    L_ij = deg(v_i)  if i = j
         = −1        if i ≠ j and v_i is adjacent to v_j
         = 0         otherwise
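
    As a sanity check, here is a minimal sketch (plain PyTorch, made-up toy graph) of the Laplacian and its quadratic form, which is exactly the smoothness regularizer L_reg above:

    import torch

    # Toy undirected path graph on 3 nodes: edges (0,1) and (1,2).
    A = torch.tensor([[0., 1., 0.],
                      [1., 0., 1.],
                      [0., 1., 0.]])
    D = torch.diag(A.sum(dim=1))  # degree matrix
    L = D - A                     # unnormalized graph Laplacian

    # Laplacian quadratic form: f^T L f = 1/2 * sum_ij A_ij (f_i - f_j)^2,
    # i.e. it penalizes f for differing across edges.
    f = torch.tensor([1., 0., -1.])
    print(f @ L @ f)  # tensor(2.)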

    Adjacency matrix

    1. The adjacency matrix of a simple graph (no self-loops) contains only 0s and 1s, with zeros on the diagonal.
    2. The adjacency matrix of an undirected graph is symmetric.
    3. The (i, j) entry of the n-th power A^n counts the number of walks of length n from vertex i to vertex j.
    4. tr(A³)/6 equals the number of triangles in an undirected graph, since each triangle is counted six times (3 starting vertices × 2 directions); see the sketch below.
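
    A minimal check of properties 3 and 4 on a made-up triangle graph:

    import torch

    # The 3-cycle: every pair of vertices is adjacent.
    A = torch.tensor([[0., 1., 1.],
                      [1., 0., 1.],
                      [1., 1., 0.]])

    # (A^n)[i, j] counts walks of length n from i to j;
    # e.g. (A^2)[0, 0] equals deg(v_0) = 2.
    print(torch.matrix_power(A, 2))

    # tr(A^3)/6 counts the triangles: here exactly one.
    print(torch.trace(torch.matrix_power(A, 3)) / 6)  # tensor(1.)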

    Spectrum

    The adjacency matrix of an undirected simple graph is symmetric, and therefore has a complete set of real eigenvalues and an orthogonal eigenvector basis. The set of eigenvalues of a graph is the spectrum of the graph. It is common to denote the eigenvalues by λ1 ≥ λ2 ≥ … ≥ λn.

    1. λ1 is bounded above by the maximum degree of the graph; this can be seen as a consequence of the Perron–Frobenius theorem.
    2. Proof sketch: let v be an eigenvector associated with λ1 and let x be the component in which v has maximum absolute value; then λ1·v_x = (Av)_x ≤ deg(x)·v_x, so λ1 ≤ deg(x) ≤ the maximum degree.
    3. λ1 − λ2 is called the spectral gap.
    4. The spectral radius is max_i |λ_i|; for a d-regular graph |λ_i| ≤ d, with λ1 = d (see the sketch below).
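
    A quick numeric illustration on the toy 3-cycle (a 2-regular graph; variable names are ours):

    import torch

    A = torch.tensor([[0., 1., 1.],
                      [1., 0., 1.],
                      [1., 1., 0.]])

    # Symmetric matrix => real eigenvalues; eigvalsh returns them ascending.
    lam = torch.linalg.eigvalsh(A).flip(0)  # descending: tensor([2., -1., -1.])

    spectral_gap = lam[0] - lam[1]          # λ1 − λ2 = 3
    spectral_radius = lam.abs().max()       # = 2 = d for this 2-regular graph
    print(lam, spectral_gap, spectral_radius)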

    If two graphs are isomorphic, their adjacency matrices A1 and A2 are similar (A2 = P A1 P^T for a permutation matrix P) and therefore have the same minimal polynomial, characteristic polynomial, eigenvalues, determinant and trace.
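
    This is easy to verify numerically; a small sketch with an arbitrary relabeling of the path graph:

    import torch

    A1 = torch.tensor([[0., 1., 0.],
                       [1., 0., 1.],
                       [0., 1., 0.]])
    P = torch.eye(3)[torch.tensor([2, 0, 1])]  # permutation matrix
    A2 = P @ A1 @ P.T                          # relabeled (isomorphic) graph

    # Identical spectra and traces.
    print(torch.linalg.eigvalsh(A1), torch.linalg.eigvalsh(A2))
    print(torch.trace(A1), torch.trace(A2))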

    Regular graph

    A graph in which every vertex has the same number of neighbors (i.e. the same degree) is called a regular graph.

    Symmetric normalized Laplacian

    L_sym = D^{−1/2} L D^{−1/2} = I − D^{−1/2} A D^{−1/2}
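
    A minimal sketch of this normalization (assuming no isolated nodes, so D^{−1/2} stays finite):

    import torch

    A = torch.tensor([[0., 1., 0.],
                      [1., 0., 1.],
                      [0., 1., 0.]])
    d_inv_sqrt = torch.diag(A.sum(dim=1).pow(-0.5))

    # L_sym = I - D^{-1/2} A D^{-1/2}; its eigenvalues lie in [0, 2].
    L_sym = torch.eye(3) - d_inv_sqrt @ A @ d_inv_sqrt
    print(torch.linalg.eigvalsh(L_sym))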

    Loss function

    Instead of relying on the explicit Laplacian regularization above, this paper trains a neural network f(X, A) that conditions directly on both the node features X and the adjacency matrix A, so the graph structure enters the model itself rather than the loss.
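
    For concreteness, a minimal sketch of the paper's two-layer forward model Z = softmax(Â ReLU(Â X W⁰) W¹) with a masked cross-entropy loss, using the library's GCNConv (the operator quoted below); the hidden size and mask name are our assumptions:

    import torch
    import torch.nn.functional as F
    from torch_geometric.nn import GCNConv

    class GCN(torch.nn.Module):
        def __init__(self, num_features, hidden, num_classes):
            super().__init__()
            self.conv1 = GCNConv(num_features, hidden)
            self.conv2 = GCNConv(hidden, num_classes)

        def forward(self, x, edge_index):
            # Each GCNConv applies D_hat^{-1/2} A_hat D_hat^{-1/2} X W.
            x = F.relu(self.conv1(x, edge_index))
            return self.conv2(x, edge_index)

    # Semi-supervised loss: cross-entropy only on labeled nodes
    # (`train_mask` is a hypothetical boolean mask over nodes):
    # loss = F.cross_entropy(model(x, edge_index)[train_mask], y[train_mask])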

    Code

    The GCNConv operator as implemented in PyTorch Geometric:

    import torch
    from torch.nn import Parameter
    from torch_scatter import scatter_add
    from torch_geometric.nn.conv import MessagePassing
    from torch_geometric.utils import add_remaining_self_loops
    
    from ..inits import glorot, zeros
    
    
    class GCNConv(MessagePassing):
        r"""The graph convolutional operator from the `"Semi-supervised
        Classification with Graph Convolutional Networks"
        <https://arxiv.org/abs/1609.02907>`_ paper
    
        .. math::
            \mathbf{X}^{\prime} = \mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}}
            \mathbf{\hat{D}}^{-1/2} \mathbf{X} \mathbf{\Theta},
    
        where :math:`\mathbf{\hat{A}} = \mathbf{A} + \mathbf{I}` denotes the
        adjacency matrix with inserted self-loops and
        :math:`\hat{D}_{ii} = \sum_{j=0} \hat{A}_{ij}` its diagonal degree matrix.
    
        Args:
            in_channels (int): Size of each input sample.
            out_channels (int): Size of each output sample.
            improved (bool, optional): If set to :obj:`True`, the layer computes
                :math:`\mathbf{\hat{A}}` as :math:`\mathbf{A} + 2\mathbf{I}`.
                (default: :obj:`False`)
            cached (bool, optional): If set to :obj:`True`, the layer will cache
                the computation of :math:`{\left(\mathbf{\hat{D}}^{-1/2}
                \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2} \right)}`.
                (default: :obj:`False`)
            bias (bool, optional): If set to :obj:`False`, the layer will not learn
                an additive bias. (default: :obj:`True`)
            **kwargs (optional): Additional arguments of
                :class:`torch_geometric.nn.conv.MessagePassing`.
        """
    
        def __init__(self,
                     in_channels,
                     out_channels,
                     improved=False,
                     cached=False,
                     bias=True,
                     **kwargs):
            super(GCNConv, self).__init__(aggr='add', **kwargs)
    
            self.in_channels = in_channels
            self.out_channels = out_channels
            self.improved = improved
            self.cached = cached
            self.cached_result = None
    
            self.weight = Parameter(torch.Tensor(in_channels, out_channels))
    
            if bias:
                self.bias = Parameter(torch.Tensor(out_channels))
            else:
                self.register_parameter('bias', None)
    
            self.reset_parameters()
    
        def reset_parameters(self):
            glorot(self.weight)
            zeros(self.bias)
            self.cached_result = None
            self.cached_num_edges = None
    
    
        @staticmethod
        def norm(edge_index, num_nodes, edge_weight, improved=False, dtype=None):
            # Computes the symmetric normalization
            # D_hat^{-1/2} A_hat D_hat^{-1/2} as a weight per edge.
            if edge_weight is None:
                edge_weight = torch.ones((edge_index.size(1), ),
                                         dtype=dtype,
                                         device=edge_index.device)

            # A_hat = A + I (or A + 2I when `improved`).
            fill_value = 1 if not improved else 2
            edge_index, edge_weight = add_remaining_self_loops(
                edge_index, edge_weight, fill_value, num_nodes)

            row, col = edge_index
            # D_hat_ii = sum_j A_hat_ij, accumulated over each edge's source node.
            deg = scatter_add(edge_weight, row, dim=0, dim_size=num_nodes)
            deg_inv_sqrt = deg.pow(-0.5)
            deg_inv_sqrt[deg_inv_sqrt == float('inf')] = 0  # guard isolated nodes

            # Per-edge coefficient D_hat_ii^{-1/2} * A_hat_ij * D_hat_jj^{-1/2}.
            return edge_index, deg_inv_sqrt[row] * edge_weight * deg_inv_sqrt[col]
    
    
        def forward(self, x, edge_index, edge_weight=None):
            """"""
            # X' = X Θ: linear transformation of the node features.
            x = torch.matmul(x, self.weight)

            if self.cached and self.cached_result is not None:
                if edge_index.size(1) != self.cached_num_edges:
                    raise RuntimeError(
                        'Cached {} number of edges, but found {}'.format(
                            self.cached_num_edges, edge_index.size(1)))

            # Recompute the normalization unless a cached result exists.
            if not self.cached or self.cached_result is None:
                self.cached_num_edges = edge_index.size(1)
                edge_index, norm = self.norm(edge_index, x.size(0), edge_weight,
                                             self.improved, x.dtype)
                self.cached_result = edge_index, norm

            edge_index, norm = self.cached_result

            return self.propagate(edge_index, x=x, norm=norm)
    
    
        def message(self, x_j, norm):
            # Scale each neighbor's features by its normalization coefficient.
            return norm.view(-1, 1) * x_j

        def update(self, aggr_out):
            # Add the bias after aggregation, if present.
            if self.bias is not None:
                aggr_out = aggr_out + self.bias
            return aggr_out
    
        def __repr__(self):
            return '{}({}, {})'.format(self.__class__.__name__, self.in_channels,
                                       self.out_channels)
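
    For comparison, here is the simplified GCNConv from the PyTorch Geometric tutorial on creating message-passing networks; it performs the same steps (add self-loops, transform, normalize, aggregate, update) but computes the normalization inside message():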
    import torch
    from torch_geometric.nn import MessagePassing
    from torch_geometric.utils import add_self_loops, degree
    
    class GCNConv(MessagePassing):
        def __init__(self, in_channels, out_channels):
            super(GCNConv, self).__init__(aggr='add')  # "Add" aggregation.
            self.lin = torch.nn.Linear(in_channels, out_channels)
    
        def forward(self, x, edge_index):
            # x has shape [N, in_channels]
            # edge_index has shape [2, E]
    
            # Step 1: Add self-loops to the adjacency matrix.
            edge_index, _ = add_self_loops(edge_index, num_nodes=x.size(0))
    
            # Step 2: Linearly transform node feature matrix.
            x = self.lin(x)
    
            # Step 3-5: Start propagating messages.
            return self.propagate(edge_index, size=(x.size(0), x.size(0)), x=x)
    
        def message(self, x_j, edge_index, size):
            # x_j has shape [E, out_channels]
    
            # Step 3: Normalize node features.
            row, col = edge_index
            deg = degree(row, size[0], dtype=x_j.dtype)
            deg_inv_sqrt = deg.pow(-0.5)
            norm = deg_inv_sqrt[row] * deg_inv_sqrt[col]
    
            return norm.view(-1, 1) * x_j
    
        def update(self, aggr_out):
            # aggr_out has shape [N, out_channels]
    
            # Step 5: Return new node embeddings.
            return aggr_out
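
    Finally, the tutorial's EdgeConv layer from the dynamic graph CNN for point clouds, which uses "max" aggregation and an MLP over the concatenated features [x_i, x_j − x_i]: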

    import torch
    from torch.nn import Sequential as Seq, Linear, ReLU
    from torch_geometric.nn import MessagePassing
    
    class EdgeConv(MessagePassing):
        def __init__(self, in_channels, out_channels):
            super(EdgeConv, self).__init__(aggr='max') #  "Max" aggregation.
            self.mlp = Seq(Linear(2 * in_channels, out_channels),
                           ReLU(),
                           Linear(out_channels, out_channels))
    
        def forward(self, x, edge_index):
            # x has shape [N, in_channels]
            # edge_index has shape [2, E]
    
            return self.propagate(edge_index, size=(x.size(0), x.size(0)), x=x)
    
        def message(self, x_i, x_j):
            # x_i has shape [E, in_channels]
            # x_j has shape [E, in_channels]
    
            tmp = torch.cat([x_i, x_j - x_i], dim=1)  # tmp has shape [E, 2 * in_channels]
            return self.mlp(tmp)
    
        def update(self, aggr_out):
            # aggr_out has shape [N, out_channels]
    
            return aggr_out
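
    A toy usage sketch of the two tutorial layers above (shapes, values, and the PyG version matching the tutorial code are assumptions on our part):

    # 4 nodes with 3 features each; undirected edges stored in both directions.
    x = torch.randn(4, 3)
    edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                               [1, 0, 2, 1, 3, 2]])

    gcn = GCNConv(3, 8)
    print(gcn(x, edge_index).shape)        # torch.Size([4, 8])

    edge_conv = EdgeConv(3, 8)
    print(edge_conv(x, edge_index).shape)  # torch.Size([4, 8])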