import tf_slim as slim
  File "/home/magdalena/.local/lib/python2.7/site-packages/tf_slim/__init__.py", line 25, in <module>
    from tf_slim.layers import *
  File "/home/magdalena/.local/lib/python2.7/site-packages/tf_slim/layers/__init__.py", line 25, in <module>
    ...
One example of when one might want this is activation functions: ONNX always represents them as separate nodes in the graph, while Flux allows them to live inside the layers. This is by no means a necessary step, but I thought it was nice to allow for things like CUDA optimizations as well ...
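To make the difference concrete, here is a minimal Flux sketch of the two representations (not from the original discussion; the shared weight matrix is only there so the equivalence can be checked):

```julia
using Flux

W = randn(Float32, 4, 3)
b = zeros(Float32, 4)

# Flux style: the activation is stored inside the layer itself.
fused = Dense(W, b, relu)

# ONNX style: the activation is a separate node after the layer.
unfused = Chain(Dense(W, b), x -> relu.(x))

x = rand(Float32, 3, 2)
@assert fused(x) ≈ unfused(x)  # same computation, different graph shape
```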
 ...layers import common_layers
-import tensorflow as tf
+import tensorflow.compat.v1 as tf
 from tensorflow.contrib.eager.python import tfe as contrib_eager

magenta/models/shared/events_rnn_graph.py (2 changed lines)
@@ -23,7 +23,7 @@
 import magenta
 import numpy as np
 import six
-import tensorflow as tf
...
It is up to whoever used the Split in the Chain to make sure that the subsequent layer accepts the tuple as an input. We already do this when we insert x -> reshape(x, :) between convolution and fully-connected layers. Generally speaking, I am not in favor of adding more layers ...
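For illustration, here is a minimal sketch of what such a usage could look like; the Split type and the vcat adapter are hypothetical, not an existing Flux API:

```julia
using Flux

# Hypothetical Split layer: applies each branch to the same input
# and returns the results as a tuple.
struct Split{T<:Tuple}
    paths::T
end
Split(paths...) = Split(paths)
Flux.@functor Split
(m::Split)(x) = map(p -> p(x), m.paths)

# The user of Split reconciles the tuple for the next layer, just as
# x -> reshape(x, :) reconciles conv output with a fully-connected layer:
model = Chain(
    Dense(4 => 8, relu),
    Split(Dense(8 => 3), Dense(8 => 3)),
    t -> vcat(t...),        # adapter the user must supply
    Dense(6 => 1),
)

model(rand(Float32, 4, 5))  # 1×5 output
```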