machine learning - Model in TensorFlow is not working, need review of the code, not sure what's going wrong


I am modifying the Deep MNIST code for my own data. I modified the model a bit, but I am facing basic issues: when I pass data to the model one example at a time, it runs really fast, but when I pass the model all examples at once it gets slow, and I am getting 0% accuracy. Kindly review the code; I am doing something horribly wrong but don't know what, or what steps I should follow to make it correct.

Here is the model:

    def deepnn(x):
      """deepnn builds the graph for a deep net for classifying digits.

      Args:
        x: an input tensor with the dimensions (N_examples, 784), where 784 is
        the number of pixels in a standard MNIST image.

      Returns:
        A tuple (y, keep_prob). y is a tensor of shape (N_examples, 10), with
        values equal to the logits of classifying the digit into one of 10
        classes (the digits 0-9). keep_prob is a scalar placeholder for the
        probability of dropout.
      """
      # Reshape the flat input back into a batch of 28x28 grayscale images.
      x_image = tf.reshape(x, [-1, 28, 28, 1])

      # First convolutional layer: maps one grayscale image to 200 feature maps.
      w_conv1 = weight_variable([5, 5, 1, 200])
      b_conv1 = bias_variable([200])
      h_conv1 = tf.nn.relu(conv2d(x_image, w_conv1) + b_conv1)

      # First pooling layer: downsamples by 2x.
      h_pool1 = max_pool_2x2(h_conv1)

      # Second convolutional layer: maps 200 feature maps to 100.
      w_conv2 = weight_variable([5, 5, 200, 100])
      b_conv2 = bias_variable([100])
      h_conv2 = tf.nn.relu(conv2d(h_pool1, w_conv2) + b_conv2)

      # Second pooling layer.
      h_pool2 = max_pool_2x2(h_conv2)

      # Fully connected layer: after two rounds of 2x downsampling, the 28x28
      # image is reduced to 7x7x100 feature maps; map these to 1024 features.
      w_fc1 = weight_variable([7 * 7 * 100, 1024])
      b_fc1 = bias_variable([1024])
      h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 100])
      h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, w_fc1) + b_fc1)

      # Dropout on the fully connected layer.
      keep_prob = tf.placeholder(tf.float32)
      h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

      # Readout layer: map the 1024 features to 19 classes.
      w_fc2 = weight_variable([1024, 19])
      b_fc2 = bias_variable([19])
      y_conv = tf.matmul(h_fc1_drop, w_fc2) + b_fc2
      return y_conv, keep_prob

Here are the functions that the model calls:

    def conv2d(x, w):
      """conv2d returns a 2d convolution layer with full stride."""
      return tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')

    def max_pool_2x2(x):
      """max_pool_2x2 downsamples a feature map by 2x."""
      return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                            strides=[1, 2, 2, 1], padding='SAME')

    def weight_variable(shape):
      """weight_variable generates a weight variable of a given shape."""
      initial = tf.truncated_normal(shape, stddev=0.1)
      return tf.Variable(initial)

    def bias_variable(shape):
      """bias_variable generates a bias variable of a given shape."""
      initial = tf.constant(0.1, shape=shape)
      return tf.Variable(initial)

And here is the main function:

    def main(_):
      x = tf.placeholder(tf.float32, [None, 784])
      y_ = tf.placeholder(tf.float32, [None, 19])

      y_conv, keep_prob = deepnn(x)

      cross_entropy = tf.reduce_mean(
          tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
      train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
      correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
      accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

      with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for i in range(34670):
          # batch = mnist.train.next_batch(50)
          if i % 1000 == 0:
            train_accuracy = accuracy.eval(feed_dict={
                x: np.reshape(input_to_nn(i), (-1, 784)),
                y_: np.reshape(output_of_nn(i), (-1, 19)),
                keep_prob: 1.0})
            print('step %d, training accuracy %g' % (i, train_accuracy))
          train_step.run(feed_dict={
              x: np.reshape(input_to_nn(i), (-1, 784)),
              y_: np.reshape(output_of_nn(i), (-1, 19)),
              keep_prob: 0.5})

        print('test accuracy %g' % accuracy.eval(feed_dict={
            x: input_nn, y_: output_nn, keep_prob: 1.0}))
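For reference, the commented-out batch line is left over from the original Deep MNIST tutorial, whose training loop fed 50-example mini-batches rather than single examples; I replaced it with my own per-example feeding. The tutorial's loop looked roughly like this:

    for i in range(20000):
      batch = mnist.train.next_batch(50)
      if i % 100 == 0:
        train_accuracy = accuracy.eval(feed_dict={
            x: batch[0], y_: batch[1], keep_prob: 1.0})
        print('step %d, training accuracy %g' % (i, train_accuracy))
      train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})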

I think the problem is in these lines:

    w_fc2 = weight_variable([1024, 19])
    b_fc2 = bias_variable([19])

Your model trains to predict 19 classes, but there are only 10 digits. If you don't actually have images from 19 classes, you had better revert those values to the original 10.
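If your data really does cover only the 10 digit classes, a rough sketch of that change, reusing the variable names from your code, would be to shrink the readout layer in deepnn:

    # Readout layer: map the 1024 features to 10 classes instead of 19.
    w_fc2 = weight_variable([1024, 10])
    b_fc2 = bias_variable([10])
    y_conv = tf.matmul(h_fc1_drop, w_fc2) + b_fc2

and make the labels placeholder in main() match, along with every label array you feed into it:

    y_ = tf.placeholder(tf.float32, [None, 10])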

