How to do optimization in pserver #1242

Open
@QiJune

Description

In TensorFlow 1.x, or a similar graph-based deep learning framework, we can serialize the model into a protobuf file and then split it into two parts: one for the worker and one for the pserver. The worker runs the forward/backward pass, and the pserver runs the optimization step.

However, in TensorFlow 2.0 there is no static computation graph and no serializable model; it is just a Python program. So how can the pserver know how to do the optimization?

Besides, a trainable variable has not only a value but also several attributes, such as an initializer, a regularizer, and a constraint. These attributes are needed when doing optimization.

One solution is to send the tensor value together with these attributes to the pserver via gRPC. The pserver then creates a new tf.Variable on the fly, so it can call optimizer.apply_gradients to update the value.
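The flow above could look like the following sketch. Plain Python classes stand in for tf.Variable and optimizer.apply_gradients, and the message schema (name/value/gradient/attributes) is hypothetical, not an existing API:

```python
# Hypothetical sketch: the pserver receives one message carrying both the
# tensor value and the variable's attributes, rebuilds a variable object on
# the fly, and applies the gradient with plain SGD.

class Variable:
    """Stand-in for tf.Variable: a value plus its attributes."""
    def __init__(self, value, initializer=None, regularizer=None, constraint=None):
        self.value = list(value)
        self.initializer = initializer
        self.regularizer = regularizer
        self.constraint = constraint

def apply_gradients(variable, gradient, lr=0.5):
    """Stand-in for optimizer.apply_gradients: one SGD step."""
    variable.value = [v - lr * g for v, g in zip(variable.value, gradient)]
    if variable.constraint is not None:
        variable.value = variable.constraint(variable.value)

# One message as the worker might send it over gRPC (schema is made up).
message = {
    "name": "dense/kernel",
    "value": [2.0, 4.0],
    "gradient": [1.0, 2.0],
    "attributes": {
        "initializer": None,
        "regularizer": None,
        "constraint": lambda v: [max(x, 0.0) for x in v],  # e.g. non-negativity
    },
}

# The pserver rebuilds the variable on every push -- this is the costly part.
var = Variable(message["value"], **message["attributes"])
apply_gradients(var, message["gradient"])
print(var.value)  # [1.5, 3.0]
```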

But this solution is expensive:

  • The value and the attributes are packed into a protobuf message and sent on every step. The value changes, but the attributes never do, so resending them is wasteful.
  • A new variable is constructed on every step. I am not sure whether we could deserialize the protobuf message directly into a new variable to avoid some of the memory copies.
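One way to address the first point would be to register the attributes once and cache the constructed variable on the pserver, so that subsequent pushes carry only the name and the gradient. A minimal sketch, assuming a hypothetical PServer with register/push_gradient RPCs (these names are not from any existing API):

```python
# Hypothetical sketch: attributes travel once at registration time; every
# later gradient push carries only the variable name and the gradient.

class PServer:
    def __init__(self, lr=0.5):
        self.lr = lr
        self.variables = {}  # name -> {"value": [...], "attributes": {...}}

    def register(self, name, value, attributes):
        """Called once per variable; the attributes never travel again."""
        self.variables[name] = {"value": list(value), "attributes": attributes}

    def push_gradient(self, name, gradient):
        """Called every step; the message carries only name and gradient."""
        var = self.variables[name]
        var["value"] = [v - self.lr * g for v, g in zip(var["value"], gradient)]
        return var["value"]

ps = PServer()
ps.register("dense/kernel", [1.0, 2.0], {"initializer": "zeros"})
print(ps.push_gradient("dense/kernel", [1.0, 2.0]))  # [0.5, 1.0]
print(ps.push_gradient("dense/kernel", [1.0, 2.0]))  # [0.0, 0.0]
```

This splits the protocol into a one-time registration message (with attributes) and a small per-step message (name + gradient), which also sidesteps reconstructing the variable on every push.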

Let's discuss and find an efficient solution.
