Abstract
BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained model based on the Transformer architecture that has achieved remarkable results in natural language processing. This article walks through several key components of the BERT model and their implementations, including the activation function, variable initialization, embedding lookup, and layer normalization. A solid understanding of these components helps readers grasp how BERT works internally and tune it in practical applications.
1. Introduction
The BERT model was proposed by Google researchers in 2018. Through large-scale unsupervised pre-training followed by task-specific fine-tuning, it significantly improved performance on a wide range of natural language processing tasks. This article focuses on several core components of the BERT model, including the activation function, variable initialization, embedding lookup, and layer normalization, and provides the corresponding code.
2. Activation Functions
2.1 Gaussian Error Linear Unit (GELU)
GELU is a smooth variant of ReLU. Its mathematical expression is:
GELU(x) = x · Φ(x)
where Φ(x) is the cumulative distribution function (CDF) of the standard normal distribution. In TensorFlow, GELU can be implemented via the tanh approximation as follows:
```python
import numpy as np
import tensorflow as tf


def gelu(x):
  """Gaussian Error Linear Unit.

  This is a smoother version of the RELU.
  Original paper: https://arxiv.org/abs/1606.08415
  Args:
    x: float Tensor to perform activation.

  Returns:
    `x` with the GELU activation applied.
  """
  cdf = 0.5 * (1.0 + tf.tanh(
      (np.sqrt(2 / np.pi) * (x + 0.044715 * tf.pow(x, 3)))))
  return x * cdf
```
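As a quick sanity check (not part of the original BERT code; the sample values below are for illustration only), the tanh approximation can be evaluated at a few points:

```python
# Sanity check of gelu() above, assuming TF 1.x graph mode as in the original.
x = tf.constant([-2.0, -1.0, 0.0, 1.0, 2.0])
y = gelu(x)

with tf.Session() as sess:
  print(sess.run(y))  # approx. [-0.0454, -0.1588, 0.0, 0.8412, 1.9546]
```

Note that GELU is not monotone near zero for negative inputs and, unlike ReLU, lets small negative values pass through with reduced magnitude.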
2.2 Activation Function Mapping
To make it easy to switch between activation functions, a mapping function get_activation is defined. It takes a string and returns the corresponding activation function:
```python
import six


def get_activation(activation_string):
  """Maps a string to a Python function, e.g., "relu" => `tf.nn.relu`.

  Args:
    activation_string: String name of the activation function.

  Returns:
    A Python function corresponding to the activation function. If
    `activation_string` is None, empty, or "linear", this will return None.
    If `activation_string` is not a string, it will return `activation_string`.

  Raises:
    ValueError: The `activation_string` does not correspond to a known
      activation.
  """
  if not isinstance(activation_string, six.string_types):
    return activation_string

  if not activation_string:
    return None

  act = activation_string.lower()
  if act == "linear":
    return None
  elif act == "relu":
    return tf.nn.relu
  elif act == "gelu":
    return gelu
  elif act == "tanh":
    return tf.tanh
  else:
    raise ValueError("Unsupported activation: %s" % act)
```
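A minimal usage sketch, assuming TensorFlow 1.x (the placeholder shape and layer width here are hypothetical), showing how a resolved activation plugs into a dense layer, much like BERT resolves the hidden_act string from its config:

```python
inputs = tf.placeholder(tf.float32, shape=[None, 128])  # hypothetical input
act_fn = get_activation("gelu")  # returns the gelu function defined above
hidden = tf.layers.dense(inputs, units=768, activation=act_fn)
```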
3. Variable Initialization
In deep learning, sensible variable initialization is critical for both convergence speed and final model quality. BERT uses a truncated normal initializer (truncated_normal_initializer) with a standard deviation of 0.02:
```python
def create_initializer(initializer_range=0.02):
  """Creates a `truncated_normal_initializer` with the given range."""
  return tf.truncated_normal_initializer(stddev=initializer_range)
```
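A brief illustration (the variable name and shape are hypothetical) of attaching this initializer to a weight matrix:

```python
# Hypothetical projection weights initialized the same way BERT initializes
# its parameters: truncated normal with stddev = 0.02.
weights = tf.get_variable(
    name="proj_kernel",
    shape=[768, 768],
    initializer=create_initializer(initializer_range=0.02))
```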
4. Embedding Lookup
Embedding lookup converts input token ids into vector representations. BERT supports two methods: one uses tf.gather(), and the other uses one-hot encoding followed by a matmul (the latter is intended for TPUs, where a dense matmul can be faster than a gather):
```python
def embedding_lookup(input_ids,
                     vocab_size,
                     embedding_size=128,
                     initializer_range=0.02,
                     word_embedding_name="word_embeddings",
                     use_one_hot_embeddings=False):
  """Looks up words embeddings for id tensor.

  Args:
    input_ids: int32 Tensor of shape [batch_size, seq_length] containing word
      ids.
    vocab_size: int. Size of the embedding vocabulary.
    embedding_size: int. Width of the word embeddings.
    initializer_range: float. Embedding initialization range.
    word_embedding_name: string. Name of the embedding table.
    use_one_hot_embeddings: bool. If True, use one-hot method for word
      embeddings. If False, use `tf.gather()`.

  Returns:
    float Tensor of shape [batch_size, seq_length, embedding_size].
  """
  if input_ids.shape.ndims == 2:
    input_ids = tf.expand_dims(input_ids, axis=[-1])

  embedding_table = tf.get_variable(
      name=word_embedding_name,
      shape=[vocab_size, embedding_size],
      initializer=create_initializer(initializer_range))

  flat_input_ids = tf.reshape(input_ids, [-1])
  if use_one_hot_embeddings:
    one_hot_input_ids = tf.one_hot(flat_input_ids, depth=vocab_size)
    output = tf.matmul(one_hot_input_ids, embedding_table)
  else:
    output = tf.gather(embedding_table, flat_input_ids)

  input_shape = get_shape_list(input_ids)

  output = tf.reshape(output,
                      input_shape[0:-1] + [input_shape[-1] * embedding_size])
  return (output, embedding_table)
```
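The code above calls a get_shape_list helper that is defined elsewhere in BERT's modeling.py and not reproduced in this article. A simplified sketch of its behavior (returning static dimensions where known and dynamic tf.shape() values otherwise) looks roughly like this:

```python
def get_shape_list(tensor):
  # Simplified sketch: prefer static (Python int) dimensions, and fall back
  # to dynamic tf.shape() scalars for dimensions unknown at graph-build time.
  # The real BERT helper additionally supports an `expected_rank` check.
  static_shape = tensor.shape.as_list()
  dynamic_shape = tf.shape(tensor)
  return [dim if dim is not None else dynamic_shape[i]
          for i, dim in enumerate(static_shape)]
```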
5. Layer Normalization
Layer normalization is a common normalization technique used to speed up training and improve generalization. BERT uses tf.contrib.layers.layer_norm to perform layer normalization:
```python
def layer_norm(input_tensor, name=None):
  """Run layer normalization on the last dimension of the tensor."""
  return tf.contrib.layers.layer_norm(
      inputs=input_tensor, begin_norm_axis=-1, begin_params_axis=-1,
      scope=name)
```
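Note that tf.contrib only exists in TensorFlow 1.x. If you are porting this code to TensorFlow 2.x, tf.keras.layers.LayerNormalization is a rough counterpart (a sketch, not a drop-in replacement: the default epsilon and variable names differ, so pre-trained checkpoints do not load directly):

```python
# Rough TF 2.x counterpart of layer_norm() above: normalize over the last
# dimension with a learned scale and offset. Defaults differ from
# tf.contrib.layers.layer_norm (e.g. epsilon), so verify before swapping in.
layer_norm_2x = tf.keras.layers.LayerNormalization(axis=-1)
```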
For convenience, a combined function layer_norm_and_dropout is also defined, which applies layer normalization first and then dropout:
```python
def layer_norm_and_dropout(input_tensor, dropout_prob, name=None):
  """Runs layer normalization followed by dropout."""
  output_tensor = layer_norm(input_tensor, name)
  output_tensor = dropout(output_tensor, dropout_prob)
  return output_tensor
```
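A typical call site (the tensor name and probability are illustrative; dropout() itself is defined in Section 6 below):

```python
# E.g. applied to the embedding output, as BERT does after embedding lookup.
embedding_output = layer_norm_and_dropout(embedding_output, dropout_prob=0.1)
```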
6. Dropout
Dropout is a common regularization technique used to prevent overfitting. In the BERT model, the dropout probability can be set through a configuration parameter:
```python
def dropout(input_tensor, dropout_prob):
  """Perform dropout.

  Args:
    input_tensor: float Tensor.
    dropout_prob: Python float. The probability of dropping out a value (NOT of
      *keeping* a dimension as in `tf.nn.dropout`).

  Returns:
    A version of `input_tensor` with dropout applied.
  """
  if dropout_prob is None or dropout_prob == 0.0:
    return input_tensor

  output = tf.nn.dropout(input_tensor, 1.0 - dropout_prob)
  return output
```
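Note the sign convention: in TF 1.x, tf.nn.dropout takes a *keep* probability, which is why the wrapper passes 1.0 - dropout_prob. A brief illustration (the tensor and probability are hypothetical):

```python
hidden = tf.random_normal([2, 4, 768])      # hypothetical activations
hidden = dropout(hidden, dropout_prob=0.1)  # zeroes ~10% of values, rescales the rest
```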
7. Loading Variables from a Checkpoint
During fine-tuning, variables usually need to be loaded from a pre-trained model checkpoint. The get_assignment_map_from_checkpoint function computes the mapping between the current variables and the checkpoint variables:
```python
import collections
import re


def get_assignment_map_from_checkpoint(tvars, init_checkpoint):
  """Compute the union of the current variables and checkpoint variables."""
  assignment_map = {}
  initialized_variable_names = {}

  name_to_variable = collections.OrderedDict()
  for var in tvars:
    name = var.name
    m = re.match("^(.*):\\d+$", name)
    if m is not None:
      name = m.group(1)
    name_to_variable[name] = var

  init_vars = tf.train.list_variables(init_checkpoint)

  assignment_map = collections.OrderedDict()
  for x in init_vars:
    (name, var) = (x[0], x[1])
    if name not in name_to_variable:
      continue
    assignment_map[name] = name
    initialized_variable_names[name] = 1
    initialized_variable_names[name + ":0"] = 1

  return (assignment_map, initialized_variable_names)
```
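A typical fine-tuning usage, mirroring how the BERT reference code wires this into model construction (here init_checkpoint is assumed to be a path to a pre-trained checkpoint):

```python
tvars = tf.trainable_variables()
(assignment_map,
 initialized_variable_names) = get_assignment_map_from_checkpoint(
     tvars, init_checkpoint)
tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
```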
8. Conclusion
This article walked through several core components of the BERT model, including the activation function, variable initialization, embedding lookup, and layer normalization. A deeper understanding of these components helps readers grasp how BERT works and optimize and adapt it in real applications. We hope this article serves as a useful reference for research and development in natural language processing.