Problem Description: When building the engine with the '--gather_all_token_logits' option, there appears to be an issue: an engine constructed with '--gather_all_token_logits' has a high probability of producing garbled characters in the fir...
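For context, here is a minimal sketch of the kind of check this option enables. The array names and shapes below are illustrative assumptions, not the TensorRT-LLM API: with per-step logits gathered, one can re-derive the greedy token at each step and compare it against the tokens actually returned, to localize where garbled output first diverges.

```python
import numpy as np

# Hypothetical data: per-step generation logits for one request,
# shape [num_generated_steps, vocab_size], as an engine built with
# --gather_all_token_logits could expose them.
rng = np.random.default_rng(0)
generation_logits = rng.standard_normal((5, 32000)).astype(np.float32)
returned_tokens = generation_logits.argmax(axis=-1).copy()
returned_tokens[3] = 123  # simulate one divergent (possibly garbled) step

# Under greedy decoding, returned tokens should equal the argmax of the
# gathered logits; the first mismatch pinpoints the suspect step.
greedy_tokens = generation_logits.argmax(axis=-1)
mismatch = np.flatnonzero(greedy_tokens != returned_tokens)
if mismatch.size:
    step = mismatch[0]
    print(f"first divergence at step {step}: "
          f"greedy={greedy_tokens[step]}, returned={returned_tokens[step]}")
```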
```python
        ... token indices for the given batch. """
        loss = 0.0
        # Convert token indices to embeddings -> T*B*E
        y_emb = self.emb(y)
        # Get initial hidden state
        h = self.f_init(*ctx_dict['txt'])
        # -1: So that we skip the timestep where input is <eos>
        for t in range(y_emb.shape[0] - 1):
            log_p, ...
```
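The snippet above is cut off mid-loop. As a rough, self-contained illustration of the same teacher-forcing pattern (the class and module names below are stand-ins, not the original project's API), the per-timestep loss accumulation typically looks like this:

```python
import torch
import torch.nn as nn

class TinyDecoder(nn.Module):
    """Minimal stand-in decoder: embeds target tokens and accumulates
    per-timestep NLL under teacher forcing."""
    def __init__(self, vocab_size=100, emb_dim=16, hid_dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.cell = nn.GRUCell(emb_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)
        self.nll = nn.NLLLoss(reduction='sum')

    def forward(self, y, h):
        # y: T x B token indices; h: B x H initial hidden state
        loss = 0.0
        y_emb = self.emb(y)  # T x B x E
        # -1: skip the timestep where the input is <eos>
        for t in range(y_emb.shape[0] - 1):
            h = self.cell(y_emb[t], h)
            log_p = torch.log_softmax(self.out(h), dim=-1)
            # The target at t+1 is predicted from the input at t
            loss = loss + self.nll(log_p, y[t + 1])
        return loss

dec = TinyDecoder()
y = torch.randint(0, 100, (7, 4))  # T=7 timesteps, B=4 sentences
h0 = torch.zeros(4, 32)
print(dec(y, h0))
```

Skipping the final timestep mirrors the `-1` in the original loop: the <eos> token is only ever a prediction target, never an input.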
The following are code examples of the tensorflow.compat.v1 module.
```python
# Required import: import tensorflow [as alias]
# Or: from tensorflow import gather_nd [as alias]
def argmax_with_score(logits, axis=None):
    """Argmax along with the value."""
    axis = axis or len(logits.get_shape()) - 1
    predictions = tf.argmax(logits, axis=axis)
    logits_shape = shape_list(logits)
    ...
```
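The example above is truncated before the gather step. A minimal, self-contained sketch of the same idea (toy shapes and values, using tf.gather_nd to pull the logit value at each argmax index) might look like:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

logits = tf.constant([[0.1, 2.0, 0.3],
                      [1.5, 0.2, 0.4]])      # [batch, vocab]
predictions = tf.argmax(logits, axis=-1)      # [batch], int64
rows = tf.range(tf.shape(logits)[0])          # [batch], int32
# Pair each row index with its argmax column, then gather the scores
indices = tf.stack([tf.cast(rows, tf.int64), predictions], axis=1)
scores = tf.gather_nd(logits, indices)        # logit value at each argmax

with tf.Session() as sess:
    preds, vals = sess.run([predictions, scores])
    print(preds, vals)  # [1 0] [2.  1.5]
```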
```python
# Required import: from tensorflow.compat import v1 [as alias]
# Or: from tensorflow.compat.v1 import batch_gather [as alias]
def _top_p_sample(logits, ignore_ids=None, num_samples=1, p=0.9):
    """ Does top-p sampling. If ignore_ids is on, then we will zero out those logits.
    ...
```
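The function body above is also cut off. Setting aside the ignore_ids masking and batching of the original, a minimal NumPy sketch of top-p (nucleus) sampling itself could read:

```python
import numpy as np

def top_p_sample(logits, p=0.9, rng=None):
    """Sample from the smallest set of tokens whose cumulative
    probability exceeds p (nucleus sampling), for a 1-D logits vector."""
    if rng is None:
        rng = np.random.default_rng()
    # Softmax with max-subtraction for numerical stability
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]        # most probable first
    csum = np.cumsum(probs[order])
    cutoff = np.searchsorted(csum, p) + 1  # smallest nucleus covering p
    keep = order[:cutoff]
    kept = probs[keep] / probs[keep].sum()  # renormalize inside the nucleus
    return rng.choice(keep, p=kept)

rng = np.random.default_rng(0)
print(top_p_sample(rng.standard_normal(10), p=0.9, rng=rng))
```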