This is not a model to print.
This is a public service announcement, or a food-for-thought contribution: you can use Stable Diffusion's image-to-image mode to take low-quality, low-resolution, low-detail UV maps and make them high resolution. This was a first run without any tweaking; I am sure I could have achieved much better results with more pruned and concise prompt engineering. This is a game changer for me.
- Did not use ControlNet.
- Denoising: 0.3 to 0.35.
- Checkpoint: v1-5-pruned-emaonly.safetensors [6ce0161689]
- Refiner: absolutereality_v181.safetensors [463d6a9fe8], switch at 0.5
- Image-to-image prompt: crisp details, HDR, sharp lines, high quality, in focus, high contrast, masterpiece, sharp details, details, follow patterns, sharpen, <lora:add_detail:1>
- Negative prompt: blurry, out of focus, bad quality, cartoon, fuzzy, blurred
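For anyone who prefers scripting this instead of the web UI, here is a minimal sketch of the same workflow using the `diffusers` library. The prompts and the 0.3 denoising strength are taken from the settings above; the checkpoint ID, file names, and the resize helper are my own assumptions (the web UI's `<lora:add_detail:1>` syntax does not apply in diffusers, so the LoRA step is omitted here).

```python
def snap_to_multiple(size, multiple=8):
    """SD v1.x works on latents whose pixel dimensions are divisible by 8;
    round a (width, height) pair down to the nearest valid size."""
    w, h = size
    return (w - w % multiple, h - h % multiple)


PROMPT = ("crisp details, HDR, sharp lines, high quality, in focus, "
          "high contrast, masterpiece, sharp details, details, "
          "follow patterns, sharpen")
NEGATIVE = "blurry, out of focus, bad quality, cartoon, fuzzy, blurred"

if __name__ == "__main__":
    # Third-party imports kept inside the guard so the helper above
    # stays importable without torch/diffusers installed.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    # Hugging Face stand-in for v1-5-pruned-emaonly.safetensors (assumption).
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    uv = Image.open("uv_map.png").convert("RGB")  # hypothetical file name
    uv = uv.resize(snap_to_multiple(uv.size))

    result = pipe(
        prompt=PROMPT,
        negative_prompt=NEGATIVE,
        image=uv,
        strength=0.3,  # the denoising range from the post: 0.3 to 0.35
    ).images[0]
    result.save("uv_map_upscaled.png")
```

A low strength like 0.3 is what keeps the UV layout intact: the model only redraws fine detail on top of the existing texture rather than inventing a new image.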
Note the difference in the UV map pictures included; the first UV map picture is the original. Again, this was my first run, with no refinement or tuning. I just guessed at my settings and they worked out. I imagine I could get much better results, but these alone floored me.
Oh, and I tested both a 1x and a 4x UltraSharp upscaled image. Both worked without having to resize or remap them to the model, so they worked out of the box.
The author marked this model as their own original creation.