
How to get the uv information on my own example #7

Open
liaochenchieh opened this issue Sep 15, 2024 · 11 comments

Comments

@liaochenchieh

Hi, thanks a lot for sharing this wonderful project!
I have tried my own single-image example with SMPL obj files estimated by ECON.
Now, when I use a visualization tool (Blender) to view the generated results, I find it hard to apply the partial_tex texture image to the smplx_d2/smplx_star object, mainly because the object does not contain UV information.
I am pretty new to this area and don't know how to handle the UV part well. Could you provide some tips for solving this problem?
Thank you very much!

@ZhanxyR
Owner

ZhanxyR commented Sep 16, 2024

Hi, you can load the obj file and rewrite it with save_mtl so that it contains the UV information. Alternatively, you can use any other method to replace the vts and faces of the .obj file. The UV information can also be found in partial_colored.obj.
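Since the untextured mesh and partial_colored.obj share the same topology, the vt records and UV-indexed faces can simply be copied across. A minimal sketch of that idea (the function names and file handling here are my own, not part of the repo):

```python
def merged_obj_lines(uv_obj_lines, plain_obj_lines):
    """Combine vertex positions from plain_obj_lines with the UV records
    (vt lines and v/vt-indexed faces) of uv_obj_lines."""
    verts = [l for l in plain_obj_lines if l.startswith("v ")]
    uvs = [l for l in uv_obj_lines if l.startswith("vt ")]
    faces = [l for l in uv_obj_lines if l.startswith("f ")]
    return verts + uvs + faces


def transfer_uvs(src_with_uv, dst_no_uv, out_path):
    """File wrapper: read both OBJs and write the merged result."""
    with open(src_with_uv) as f:
        uv_lines = f.readlines()
    with open(dst_no_uv) as f:
        plain_lines = f.readlines()
    with open(out_path, "w") as f:
        f.writelines(merged_obj_lines(uv_lines, plain_lines))
```

This only works if both meshes really have the same vertex order and face count; otherwise the UV-indexed faces will point at the wrong vertices.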

@liaochenchieh
Author

@ZhanxyR Hi, I just tried out the save_mtl function, and it is exactly what I need for the UV mapping! Thanks a lot again for the solution!

@ZhanxyR
Owner

ZhanxyR commented Sep 16, 2024

You're welcome, and I'll close this issue. If you have any other questions, feel free to ask.

@ZhanxyR ZhanxyR closed this as completed Sep 16, 2024
@liaochenchieh
Author

@ZhanxyR Hi, thanks for the previous solution. I am currently having an issue with exporting an FBX / rigging the SMPL model from the estimated results we have. I previously tried the official SMPL-X Blender add-on, but it seems it cannot import the model from my parameters.
I know it might not be the focus of this project, but would you happen to have any insight about how we can generate a rigged model (for example, FBX that can be animated in applications like VR), given the estimated parameters we have in this project?

@ZhanxyR
Owner

ZhanxyR commented Sep 17, 2024

Unfortunately, I am not familiar with the FBX file structure. :(
But we use this function to subdivide the original SMPL-X model and remove the eyeballs, which changes it from (vertices: 10475, faces: 20908) to (vertices: 149921, faces: 299712).
We also regenerate the skinning weights, which are saved at data/skinning_weights/lbs_weights_divide2.npy.
I will reopen this issue and hope this helps you.
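For anyone rigging against these files: linear-blend-skinning weights such as lbs_weights_divide2.npy are typically a (num_vertices, num_joints) matrix whose rows are convex combinations over the joints. A quick sanity check, shown here on synthetic data (the 55-joint count is my assumption for SMPL-X; the real array would come from np.load on the path above):

```python
import numpy as np


def check_lbs_weights(weights):
    """Verify the basic LBS invariants: 2-D, non-negative, rows sum to 1."""
    w = np.asarray(weights, dtype=float)
    assert w.ndim == 2, "expected shape (num_vertices, num_joints)"
    assert (w >= 0).all(), "weights must be non-negative"
    assert np.allclose(w.sum(axis=1), 1.0), "each vertex's weights must sum to 1"
    return w.shape


# Synthetic stand-in for the real file (np.load('lbs_weights_divide2.npy')):
rng = np.random.default_rng(0)
w = rng.random((10, 55))              # 10 vertices, 55 joints (assumed)
w /= w.sum(axis=1, keepdims=True)     # row-normalize into valid LBS weights
```

If a loaded weight file fails these checks, it is probably not aligned with the mesh you are trying to rig.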

@ZhanxyR ZhanxyR reopened this Sep 17, 2024
@liaochenchieh
Author

liaochenchieh commented Sep 27, 2024

@ZhanxyR Hi, thanks for reopening the issue.
Since rigging a new object is relatively hard, I am now considering using the original SMPL or SMPL-X model (which is already rigged), since it is easy to control in real time for my application.
Therefore, my goal is to generate a color texture map that can fit the SMPL (or SMPL-X) topology.

I think that some parts of our pipeline may help. Would you have any advice on this? Or would you know if there is any existing method I can look into?
I'm looking forward to talking with you more. Thank you!

@liaochenchieh
Author

Would it be possible to run your pipeline without subdividing the SMPL-X model? (Though it seems that many parts of the code would need to be changed.)

@ZhanxyR
Owner

ZhanxyR commented Sep 28, 2024

Apologies for the delay in my response. If you just want a SMPL-X-based result without subdivision, there is a very simple way. As shown in Fig. S15 of our SupMat., you can extract only the first 9383 vertices and the first 18732 triangle faces from the subdivided mesh file to get a SMPL-X mesh (but without the eyeballs, which have to be added back manually or adaptively).

@ZhanxyR
Owner

ZhanxyR commented Sep 28, 2024

  import numpy as np


  # Parse an .obj file into vertex positions (v), UV coordinates (vt), and faces (f).
  def load_obj(file):
      verts = []
      vts = []
      faces = []
      with open(file) as f:
          while True:
              line = f.readline()
              if not line:
                  break
              strs = line.split(" ")
              if strs[0] == "v":
                  verts.append((float(strs[1]), float(strs[2]), float(strs[3])))
              elif strs[0] == "vt":
                  vts.append((float(strs[1]), float(strs[2])))
              elif strs[0] == 'f':
                  faces.append([[int(s) for s in strs[1].split("/")], [int(s) for s in strs[2].split("/")],
                              [int(s) for s in strs[3].split("/")]])
              else:
                  continue
      return np.array(verts, dtype=float), np.array(vts, dtype=float), np.array(faces, dtype=int)


  # Write an .obj, optionally with per-vertex colors and UV coordinates.
  def save_obj(verts, faces, path_out, single=False, vts=None, colors=None):
      with open(path_out, 'w') as fp:

          fp.write('mtllib material.mtl\nusemtl material\n')

          if colors is not None:
              for i in range(len(verts)):
                  vi_np = np.array(verts[i])
                  color_np = np.array(colors[i])
                  # fp.write('v %f %f %f\n' % (vi_np[0], vi_np[1], vi_np[2]))
                  fp.write('v %f %f %f %f %f %f\n' % (vi_np[0], vi_np[1], vi_np[2], color_np[0], color_np[1], color_np[2]))
          else:
              for vi in verts:
                  vi_np = np.array(vi)
                  fp.write('v %f %f %f\n' % (vi_np[0], vi_np[1], vi_np[2]))

          if vts is not None:
              for vt in vts:
                  vt_np = np.array(vt)
                  fp.write("vt %f %f\n" % (vt_np[0], vt_np[1]))

          for fi in faces:
              ft = np.array(fi)
              if not single:
                  fp.write('f %d/%d %d/%d %d/%d\n' % (ft[0][0], ft[0][1], ft[1][0], ft[1][1], ft[2][0], ft[2][1]))
              else:
                  if len(ft.shape) == 2:
                      ft = ft[..., 0]
                  fp.write('f %d %d %d\n' % (ft[0], ft[1], ft[2]))



  # Slice the subdivided mesh back down to SMPL-X-level topology, using the
  # precomputed vertex counts, faces, and UVs stored in the .npz files.
  def downsample(mesh_path, level=1):

      verts, _, faces = load_obj(mesh_path)

      info = np.load('downsample_level1.npz', allow_pickle=True)

      verts = verts[:info['verts_num']]

      save_obj(verts, info['faces'], 'test_1.obj', vts=info['vts'])

      info = np.load('downsample_level2.npz', allow_pickle=True)

      verts = verts[:info['verts_num']]

      save_obj(verts, info['faces'], 'test_2.obj', vts=info['vts'])





  if __name__ == '__main__':

      # down_level_1_path = 'old/down_1.obj'
      # verts, vts, faces = load_obj(down_level_1_path)
      # np.savez('downsample_level1.npz', verts_num=len(verts), vts=vts, faces=faces)
      # print(vts.shape, faces.shape) # (56196, 2) (18732, 3, 2)
      # exit()

      # down_level_2_path = 'old/down_0.obj'
      # verts, vts, faces = load_obj(down_level_2_path)
      # np.savez('downsample_level2.npz', verts_num=len(verts), vts=vts, faces=faces)


      mesh_path = 'old/0455_smooth_smplx.obj'

      downsample(mesh_path)

@liaochenchieh This is a simple script I have tested. The required .npz files can be downloaded here.
You can check this function to see how we deal with the eyeballs.

And one step we did not show in this repo is using Real-ESRGAN to super-resolve the projected texture result, which brings a great improvement, especially on the human face.

[image: texture comparison — left: after super-resolution, right: before]

@liaochenchieh
Author

@ZhanxyR Thanks for the reply!
Does it mean I should create a SMPL-X based texture map by using your downsampling method and replace the mesh_path in the texture projection function with the downsampled mesh?

@ZhanxyR
Owner

ZhanxyR commented Sep 28, 2024 via email
