
Releases: Sanster/IOPaint

IOPaint-1.3.3

06 May 15:13

BrushNet and PowerPaintV2 can turn any normal SD1.5 model into an inpainting model.
When using any SD1.5 base model (e.g. runwayml/stable-diffusion-v1-5), the BrushNet/PowerPaintV2 options will appear in the sidebar. The model weights are downloaded automatically the first time they are used.

For BrushNet, there are two checkpoints to choose from: brushnet_segmentation_mask and brushnet_random_mask.
brushnet_segmentation_mask keeps the final inpainting result consistent with the mask shape,
while brushnet_random_mask is a more general checkpoint for arbitrary mask shapes.

For PowerPaintV2, just like PowerPaintV1, it was trained with "learnable task prompts" to guide the model in achieving specific tasks more effectively. These tasks include text-guided, shape-guided, object-remove, and outpainting.
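As a sketch, enabling these in practice just means starting IOPaint with a regular (non-inpainting) SD1.5 base model; the BrushNet/PowerPaintV2 options then appear in the web UI. The --device and --port values below are illustrative assumptions:

```shell
# Start IOPaint with a plain SD1.5 base model (not an inpainting model).
# BrushNet / PowerPaintV2 can then be selected in the web UI sidebar;
# their weights download automatically on first use.
iopaint start --model runwayml/stable-diffusion-v1-5 --device cuda --port 8080
```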


IOPaint-1.2.2

05 Mar 15:21
  • Press and hold Alt/Option and use the mouse wheel to adjust the brush size
  • Fix a minor bug in Extender (outpainting)

IOPaint-1.2.0

20 Feb 13:33
  • New --interactive-seg-model options from Segment Anything in High Quality (SAM-HQ):

    • sam_hq_vit_b
    • sam_hq_vit_l
    • sam_hq_vit_h
  • Automatically expand the input box when prompt input is selected.

(demo video: prompt_input.mp4)

IOPaint-1.1.1

10 Feb 05:36
  • Automatically open the browser after the service is started: iopaint start --inbrowser.
  • RemoveBG supports more models, e.g. iopaint start --enable-remove-bg --remove-bg-model briaai/RMBG-1.4. Available models:
    • u2net
    • u2netp
    • u2net_human_seg
    • u2net_cloth_seg
    • silueta
    • isnet-general-use
    • briaai/RMBG-1.4
  • If a plugin is enabled, its model can be switched on the frontend page.

IOPaint

03 Feb 14:35
fcd8254

You can check the main feature updates of IOPaint in the Beta Release. This release mainly includes:

New Model: AnyText

--model=Sanster/AnyText

(demo video: AnyText.mp4)

Other


IOPaint Beta release

05 Jan 13:56



It's been a while since the last release of lama-cleaner (now renamed IOPaint), partly because during this time I released my first macOS application, OptiClean, and started playing Baldur's Gate 3, which has taken up a lot of my free time. Apart from time constraints, another reason is that the project's code had become increasingly complex, making it difficult to add new features. I was hesitant to touch the code, but in the end I decided to completely refactor the project's front-end and back-end, hoping this change will help the project move forward.

The refactoring includes: switching to Vite, a new CSS framework (tailwindcss), a new state management library (zustand), a new UI library (shadcn/ui), and more modern Python libraries such as fastapi and typer. These refactors were painful and involved significant changes, but I think they were worth it. They made the project structure clearer, new features easier to add, and the codebase more accessible for others to contribute to.

Although I am not an AIGC artist or creator, I enjoy developing tools and I wanted to make this project more practical for inpainting and outpainting needs. Below are the new features/models that have been added with this beta release:

New cli command: Batch processing

After pip install iopaint, you can use the iopaint command in the command line. iopaint start is the command for starting the model service and web interface, while iopaint run is used for batch processing. You can use iopaint start --help or iopaint run --help to view the supported arguments.
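A minimal sketch of a batch run, assuming the lama model and placeholder folder paths (check iopaint run --help for the authoritative flag list):

```shell
# Apply the lama model to every image in ./images using the
# corresponding masks in ./masks, writing results to ./results.
iopaint run --model=lama --device=cpu \
  --image=./images --mask=./masks --output=./results
```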

Better API doc

Thanks to fastapi, now all backend interfaces have clear API documentation. After starting the service with iopaint start, you can access http://localhost:8080/docs to view the API documentation.
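Because the backend is FastAPI, the interactive page at /docs is backed by a machine-readable OpenAPI schema that can be fetched directly (default port 8080 assumed):

```shell
# Fetch the OpenAPI schema that powers the /docs page
curl http://localhost:8080/openapi.json
```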


Support for all SD 1.5/SD 2/SDXL normal or inpainting models from HuggingFace

Previously, lama-cleaner only supported the built-in anything4 and realisticVision1.4 SD inpaint models. iopaint start --model now supports automatically downloading models from HuggingFace, such as Lykon/dreamshaper-8-inpainting. You can search available sd inpaint models on HuggingFace. Not only inpaint models, but you can also specify regular sd models, such as Lykon/dreamshaper-8. The downloaded models can be seen and switched on the front-end page.
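For example (model ids taken from the text above; --device is an illustrative assumption):

```shell
# Inpainting model from HuggingFace, downloaded automatically on first use
iopaint start --model Lykon/dreamshaper-8-inpainting --device cuda

# A regular (non-inpainting) SD model also works
iopaint start --model Lykon/dreamshaper-8 --device cuda
```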


Single-file ckpt/safetensors models are also supported. IOPaint will search for models under the stable_diffusion folder inside the --model-dir directory (default: ~/.cache).
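A sketch of the expected layout, assuming the default --model-dir of ~/.cache and a hypothetical file name:

```shell
# Place single-file models under <model-dir>/stable_diffusion/
mkdir -p ~/.cache/stable_diffusion
cp my-inpaint-model.safetensors ~/.cache/stable_diffusion/

# Start IOPaint; the model can then be selected on the frontend page
iopaint start --model-dir ~/.cache
```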


Outpainting

You can use Extender to expand images, with optional directions of x, y, and xy. You can use the built-in expansion ratio or adjust the expansion area yourself.

(demo video: outpainting.mp4)

Generate mask from segmentation model

RemoveBG and Anime Segmentation are two segmentation models. In previous versions, they could only be used to remove the background of an image. Now, the results from these two models can be used to generate masks for inpainting.


Expand or shrink mask

Masks from interactive segmentation or from the RemoveBG/Anime Segmentation plugins can be expanded or shrunk.


More samplers

I have added more samplers based on the Diffusers issue "A1111 <> Diffusers Scheduler mapping" (#4167).


LCM Lora

https://huggingface.co/docs/diffusers/main/en/using-diffusers/inference_with_lcm_lora

Latent Consistency Models (LCM) enable quality image generation in typically 2-4 steps, making it possible to use diffusion models in almost real-time settings. The model will be downloaded on first use.


FreeU

https://huggingface.co/docs/diffusers/main/en/using-diffusers/freeu

FreeU is a technique for improving image quality. Different models may require different FreeU-specific hyperparameters; you can find how to adjust these parameters in the original project: https://github.com/ChenyangSi/FreeU


New Models

  • MIGAN: much smaller (27MB) and faster than other inpainting models (LaMa is ~200MB), while still achieving good results.
  • PowerPaint: --model Sanster/PowerPaint-V1-stable-diffusion-inpainting. PowerPaint is a stable diffusion model optimized for inpainting/outpainting/remove_object tasks; you can control the model's results by specifying the task.
  • Kandinsky 2.2 inpaint model: --model kandinsky-community/kandinsky-2-2-decoder-inpaint
  • SDXL inpaint model: --model diffusers/stable-diffusion-xl-1.0-inpainting-0.1
  • MobileSAM: --interactive-seg-model mobile_sam Lightweight SAM model, faster and requires fewer resources. Currently, there are many other variations of the SAM model. Feel free to submit a PR!

Other improvements

  • If FileManager is enabled, you can use the Left/Right arrow keys to switch images
  • Fix icc_profile loss issue (the image's colors shifted globally after inpainting)
  • Fix EXIF rotation issue

Windows installer users

For 1-click Windows installer users, first of all, thank you for your support. The current version of IOPaint is a beta; I will update the installation package after the official release of IOPaint.

Maybe a cup of coffee

During the development process, lots of coffee was consumed. If you find my project useful, please consider buying me a cup of coffee. Thank you! ❤️. https://ko-fi.com/Z8Z1CZJGY

1.2.0

06 Jun 14:19

Hi everyone, I am planning to create a macOS inpainting app. It would be completely native from UI to model, with no JS/Python/PyTorch involved. The app will be small and have better local file support, e.g. directly modifying a local image file. Here is a demo video; the model runs MUCH faster on the M2 GPU than using PyTorch (mps device).

(demo video: mac_app2.mp4)

If you are interested, please fill out this form with your email address. I will send an email notification when the app is released and provide a 50% discount code for you. Thank you.

1.2.0 Update

Better ControlNet support in Stable Diffusion

https://lama-cleaner-docs.vercel.app/models/controlnet

  • --sd-controlnet: enable controlnet in stable diffusion
  • --sd-controlnet-method: set the ControlNet method used; the method can be changed in the web UI
    • control_v11p_sd15_canny
    • control_v11p_sd15_openpose
    • control_v11p_sd15_inpaint
    • control_v11f1p_sd15_depth

New plugin

Anime Segmentation: --enable-anime-seg


New icon/logo

Other improvement

  • fix exif issue: #299

  • Remove scikit-image to make installation easier on Python 3.11

  • Use new font: Inter

  • Show Stable Diffusion inpainting progress:

    (demo video: sd_progress.mp4)
  • Show prev mask

    (demo video: show_prev_mask.mp4)

1.1.1

06 Apr 15:01

Use Segment Anything model to do interactive segmentation. See demo here: https://twitter.com/sfjccz/status/1643992289294057472?s=20

--enable-interactive-seg --interactive-seg-model=vit_l --interactive-seg-device=cuda
  • Available models:
    • vit_b: small
    • vit_l: medium (recommended)
    • vit_h: large
  • Available devices:
    • cuda
    • cpu
    • mps

1.0.0

01 Apr 12:59

This version contains a lot of features, so I set the version number to 1.0. I hope these updates will help you in your work.

Plugins


When cleaning up images, algorithms such as face restoration or super-resolution are often used in post-processing alongside erasing. Now you can use them directly in Lama Cleaner. See the Plugins Doc for how to use them.

Other Features

  • Stable Diffusion ControlNet Inpainting: thanks to https://github.com/mikonvergence/ControlNetInpaint, you can now use ControlNet inpainting with the sd1.5 model. This can make your inpainting results more consistent with the original structure. Run lama-cleaner with --sd-controlnet to enable it.
  • Load a Stable Diffusion 1.5 model (ckpt/safetensors) from a local path: run lama-cleaner with --model sd1.5 --sd-local-model-path /path/to/your/local/inpainting_model.ckpt to enable it. You can learn how to create an inpainting model in AUTO1111's webui here
  • MAT model vRAM usage improvement: now defaults to fp16, which uses less vRAM and runs faster.
  • Better FileManager: implemented some improvement suggestions mentioned here

0.37.0

01 Mar 13:58

New Stable Diffusion inpainting models; you can choose different models for different scenarios.

(comparison images: Original / sd1.5 / anything4)