
About the prompt rules #8

Open
diamond0910 opened this issue Jan 10, 2022 · 10 comments

@diamond0910

Great work!

I would like to know what kind of sentences are reasonable and valid for CLIP. Are there any specific prompt rules for style sentences in your paper?

I tried the prompt 'an image of a car of wood' on an input car.

init: [image]

final: [image: car]

What I want is: [image]

But it turned out that the geometry of the car became disorganized and even self-intersecting, the original shape was no longer recognizable, and the surface did not take on the texture of wood. May I ask where the problem is?

I hope you can answer my two questions: the prompt rules and this effect. Thank you very much.

Best.

@roibaron
Contributor

roibaron commented Jan 10, 2022

Hey @zhouwy19, great question!
Generally speaking, we don't know how the CLIP landscape looks, but repetitive patterns, such as wood, should be the easiest to achieve.

The main issue that I see with your setting is the alignment. You should aim to capture a more meaningful view as your frontview. See subsection 3.3 in the paper (anchor view).

There could also be an issue with the resolution of the mesh you are using, which looks a bit too low. How many vertices does it have? Can you share the .obj file?

I believe that alone should solve your problem, but keep in mind that self-intersections are an inherent problem in inverse rendering (and in any pipeline that optimizes 3D geometry based on 2D views). The classical solution is to introduce a regularization term in the form of Laplacian energies.
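
For illustration, here is a minimal sketch of such a uniform Laplacian energy in PyTorch. This is not the Text2Mesh implementation: the dense adjacency matrix is only practical for small meshes, and the weight `lambda_lap` in the usage comment is hypothetical.

```python
import torch

def laplacian_energy(verts: torch.Tensor, faces: torch.Tensor) -> torch.Tensor:
    """Uniform Laplacian energy: penalizes vertices that drift away from the
    mean of their one-ring neighbors. verts: (V, 3) float, faces: (F, 3) long."""
    V = verts.shape[0]
    # Undirected edges from the face list (both directions).
    i = torch.cat([faces[:, 0], faces[:, 1], faces[:, 2],
                   faces[:, 1], faces[:, 2], faces[:, 0]])
    j = torch.cat([faces[:, 1], faces[:, 2], faces[:, 0],
                   faces[:, 0], faces[:, 1], faces[:, 2]])
    # Dense adjacency; fine for a sketch, use a sparse matrix for large meshes.
    adj = torch.zeros(V, V, device=verts.device, dtype=verts.dtype)
    adj[i, j] = 1.0
    degree = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    neighbor_mean = adj @ verts / degree           # (V, 3)
    delta = verts - neighbor_mean                  # Laplacian coordinates
    return (delta ** 2).sum(dim=1).mean()

# Hypothetical use inside the optimization loop:
# loss = clip_loss + lambda_lap * laplacian_energy(deformed_verts, faces)
```

In practice a sparse Laplacian or an existing implementation (e.g. mesh_laplacian_smoothing in PyTorch3D) would be preferable.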

Good luck!
Roi

@diamond0910
Author

Thank you very much for your quick reply!

I did not find any code for finding the anchor view. I see that frontview_center is set to [0,0] in the code. Do all the obj files you provide have their highest-CLIP-score view aligned to [0,0]?

@diamond0910
Author

Oh, I see different settings in the shell files, such as '--frontview_center 1.96349 0.6283'. It seems this is different for each mesh?

@roibaron
Contributor

You are right, we didn't share a script for finding this view.

The easiest way to set it up is to rotate the mesh with an external editor.

Alternatively, you can iterate over views to find one with a high CLIP score, given a prompt like 'an image of a car'.
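
As an illustration of that alternative, here is a minimal sketch (not from the repo) that scores a set of pre-rendered candidate views with CLIP and keeps the best one. The renders/view_*.png naming is an assumption; you would produce those renders with whatever renderer you have at hand.

```python
import glob
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

text = clip.tokenize(["an image of a car"]).to(device)
with torch.no_grad():
    text_feat = model.encode_text(text)
    text_feat /= text_feat.norm(dim=-1, keepdim=True)

best_score, best_view = -1.0, None
for path in sorted(glob.glob("renders/view_*.png")):
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        img_feat = model.encode_image(image)
        img_feat /= img_feat.norm(dim=-1, keepdim=True)
    score = (img_feat @ text_feat.T).item()  # cosine similarity
    if score > best_score:
        best_score, best_view = score, path

print(f"Best view: {best_view} (CLIP score {best_score:.3f})")
```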

Roi

@diamond0910
Author

Thank you.

I would like to ask: for a mesh like this one, how should I set frontview_center to get a picture with a horizontal perspective like the one below?

[image]

[image]

And compared with the following, which one is better as an anchor view? The one below shows more detail of the car.
[image]

@roibaron
Contributor

My intuition says that a car facing the ground is harder to capture. I would suggest a front-facing mesh.

@diamond0910
Author

How can I rotate a car that faces the ground into a front-facing mesh?

@ojmichel

In MeshLab you can use the rotation filter. You will want to see the front of the car when looking down the -x axis. Also, this mesh has large triangles, which will make the results worse. To fix this, you can use the "Remeshing: Isotropic Explicit Remeshing" filter in MeshLab.
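
If you would rather script the rotation than do it in the MeshLab GUI, here is a minimal sketch using trimesh (my own suggestion, not part of the repo). The file names and the 90° rotation about the y-axis are placeholders to adjust until the front of the car points down the -x axis.

```python
import numpy as np
import trimesh

mesh = trimesh.load("car.obj", force="mesh")

# Rotate 90 degrees about the y-axis, around the mesh centroid.
rotation = trimesh.transformations.rotation_matrix(
    angle=np.radians(90), direction=[0, 1, 0], point=mesh.centroid
)
mesh.apply_transform(rotation)
mesh.export("car_rotated.obj")
```

pymeshlab also exposes MeshLab's filters in Python, including the isotropic remeshing mentioned above, if you want to script that step as well.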

@DaichenWang

Hello, do you know how to import your own obj? I am also trying this project. I am using Kaggle, but it seems I cannot import my own obj.

@wimmerth

> You are right, we didn't share a script for finding this view.
>
> The easiest way to set it up is to rotate the mesh with an external editor.

Hey @roibaron,
I would be interested in how you used an external editor to find the view with the highest alignment according to CLIP. Is there an editor with some kind of CLIP plugin that you used? Because in the end you still need to run inference on the rendered view with your model in Python, don't you? Did you automate the process of finding the anchor view? If so, can you share how?
