Asynchronous gradient descent in TensorFlow under concurrent access


I have been having trouble implementing asynchronous gradient descent in a multithreaded environment.

To describe the skeleton of the code, each thread runs:

    loop
        synchronize with the global parameters
        <work / accumulate gradients on a mini-batch>
        apply gradient descent to the global network, specifically:
            self.optimizer.apply_gradients(grads_and_vars)
    end

where each thread has its own optimizer.
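For concreteness, here is a minimal sketch of how I understand such a worker loop is typically wired up in TensorFlow 1.x graph mode (A3C style). The variable scopes, the toy one-variable model, and the names build_worker / sync_op are my illustrative assumptions, not the actual code in question:

    import threading
    import tensorflow as tf  # assumes the TensorFlow 1.x graph-mode API

    # Toy "global network": a single parameter vector. The scopes and the
    # one-variable model are illustrative assumptions only.
    with tf.variable_scope("global"):
        global_w = tf.get_variable("w", initializer=tf.zeros([4]))

    def build_worker(worker_id):
        with tf.variable_scope("worker_%d" % worker_id):
            local_w = tf.get_variable("w", initializer=tf.zeros([4]))
        # "synchronize global param": copy the global weights into the local copy.
        sync_op = local_w.assign(global_w)
        # Dummy loss on the local copy; a real agent would compute it from rollouts.
        loss = tf.reduce_sum(tf.square(local_w - 1.0))
        grads = tf.gradients(loss, [local_w])
        # Each thread has its own optimizer; gradients are computed on the
        # local copy but applied to the GLOBAL variable.
        opt = tf.train.RMSPropOptimizer(0.01, use_locking=False)
        train_op = opt.apply_gradients(list(zip(grads, [global_w])))
        return sync_op, train_op

    workers = [build_worker(i) for i in range(2)]
    sess = tf.Session()
    sess.run(tf.global_variables_initializer())

    def worker_loop(sync_op, train_op, steps=1000):
        for _ in range(steps):
            sess.run(sync_op)   # pull the latest global parameters
            sess.run(train_op)  # apply local gradients to the global parameters

    threads = [threading.Thread(target=worker_loop, args=w) for w in workers]
    for t in threads:
        t.start()
    for t in threads:
        t.join()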

Now the problem is that when the optimizer is defined with use_locking=False, it does not work, as evidenced by the rewards generated by the reinforcement learning agent.

However, when I set use_locking=True, it works and the algorithm behaves correctly; so it seems that with use_locking=False the local gradients are not being applied to the global parameters.
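For reference, use_locking is an argument on the tf.train optimizer constructors; per the TF1 documentation, setting it to True makes the optimizer's update operations take a lock on the variables they write, so concurrent updates from several threads are serialized rather than interleaved. A minimal example:

    import tensorflow as tf  # TensorFlow 1.x

    # use_locking=True makes the update ops emitted by apply_gradients
    # acquire a lock on each variable they modify, serializing concurrent
    # updates to the same global variable.
    optimizer = tf.train.RMSPropOptimizer(learning_rate=1e-4, use_locking=True)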

So the possible reasons I have thought of are the following:

1. While one thread is updating the global parameters, another thread's access causes the former thread's remaining updates to be cancelled. With many threads accessing the global parameters concurrently, the threads work hard for nothing (see the pure-Python sketch after this list).

2. Referring to "How does asynchronous training work in distributed TensorFlow?", reading asynchronously at the top of the loop should be fine. However, it may be that a thread which has just finished applying its gradients goes back to synchronizing with the global parameters without fetching the updates made by other threads.
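On reason 1: if that is what is happening, it would amount to a lost-update race. As a pure-Python analogy (no TensorFlow involved; this sketches the failure mode, not TensorFlow's internals), an unlocked read-modify-write on a shared value loses updates in the same way:

    import threading

    counter = 0  # stands in for a global parameter

    def worker(steps):
        global counter
        for _ in range(steps):
            v = counter   # read the "global param"
            v += 1        # compute the update locally
            counter = v   # write back; may overwrite another thread's write

    threads = [threading.Thread(target=worker, args=(100000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # typically well below 400000: interleaved writes were lost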

Can a TensorFlow developer explain to me what is happening with use_locking in this specific loop instance?

I have been spending days on this simple example. Although setting use_locking=True solves the issue, it is not asynchronous in nature and it is slow.

I would appreciate any help.

