Gen AI Virtual Try-On API for Apparel: Integration Guide for Teams

Written by WEARFITS Team | Apr 23, 2026 5:29:59 AM

AI virtual try-on for apparel can make online shopping feel much more personal. Instead of asking shoppers to imagine how a garment might look on their body, it gives them a visual result based on their own photos or sizing data. That can make product pages more engaging, help customers feel more confident, and create a smoother path from browsing to purchase.

This article turns the WEARFITS Integration Guide and API Reference into a simpler, easier-to-read overview. The goal is not to repeat the documentation line by line. The goal is to explain, in plain English, how the system works, what your integration options are, and what matters most if you want to roll out AI virtual try-on on an e-commerce website.

What WEARFITS Actually Does

WEARFITS uses AI to create a digital twin of the shopper and then render garments onto that twin. In practice, the experience works in two steps.

First, the platform creates a digital twin. This is a reusable model of the shopper based on a face photo plus either a full-body photo, a silhouette image, or clothing size. Then, once the twin exists, the system runs a virtual fitting and shows garments on that person with a more realistic sense of fit, drape, and overall appearance.

This matters because the setup work happens once, while the same shopper can try many products afterward. That is a much better e-commerce flow than rebuilding the entire user model for every single item.

You can contact us to learn more.

The Three Main Integration Options

The Integration Guide describes three ways to put WEARFITS into a store:

  • Iframe embedding: put the try-on app inside your website and communicate with it using postMessage.
  • JavaScript modal: open the try-on app as a modal overlay when a shopper clicks a button.
  • Standalone page: send the shopper to a dedicated try-on page with URL parameters and preloaded data.

For most teams, iframe embedding is the easiest place to start. It gives you a controlled integration without forcing you to build the full try-on frontend from scratch. If you want even more control, the API is there as a second path.

The Simple Version: How the User Journey Works

  1. The shopper opens the try-on flow from your store.
  2. They upload a selfie and a full-body photo, or you send another supported input mode.
  3. WEARFITS creates a digital twin.
  4. The shopper chooses a top, bottom, full-body garment, or shoes.
  5. WEARFITS runs the virtual fitting.
  6. You show the final output image inside your site or store it for later use.

That is the whole system at a high level. The rest is about choosing the right integration method and sending the right data.

Why Iframe Embedding Is a Good First Choice

The hosted app can be embedded with a simple iframe. This is the route recommended in the Integration Guide when you want a practical rollout without building everything yourself.

<iframe
  id="wearfits-frame"
  src="https://tryon.wearfits.com"
  style="width: 100%; height: 100vh; border: none;"
  allow="camera">
</iframe>

That last part, allow="camera", is important. Camera or photo access is needed for digital twin creation, so mobile and desktop permissions need to be handled properly from the start.

Once the iframe is loaded, your site can listen for events such as:

  • WEARFITS_READY
  • WEARFITS_INIT
  • WEARFITS_SET_PRODUCTS
  • WEARFITS_COMPLETE
  • WEARFITS_ERROR
  • WEARFITS_CLOSE

In simple terms, this means your storefront can know when the app is ready, pass products into it, respond when the try-on is finished, and handle errors or close actions without losing control of the customer experience.
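As a minimal sketch, a storefront listener for those events might look like the following. The event names come from the Integration Guide; the payload shape (`{ type, data }`) and the returned action objects are assumptions for illustration.

```javascript
// Only accept messages from the real WEARFITS origin in production.
const WEARFITS_ORIGIN = 'https://tryon.wearfits.com';

function handleWearfitsMessage(event) {
  // Ignore messages from any other origin.
  if (event.origin !== WEARFITS_ORIGIN) return null;

  const { type, data } = event.data || {};
  switch (type) {
    case 'WEARFITS_READY':
      return { action: 'send-products' };     // app loaded: push product JSON
    case 'WEARFITS_COMPLETE':
      return { action: 'show-result', data }; // try-on finished
    case 'WEARFITS_ERROR':
      return { action: 'show-error', data };  // surface a plain-language message
    case 'WEARFITS_CLOSE':
      return { action: 'close-overlay' };
    default:
      return null;                            // unknown event: do nothing
  }
}

// Browser wiring; guarded so the handler stays testable outside a browser.
if (typeof window !== 'undefined') {
  window.addEventListener('message', handleWearfitsMessage);
}
```

Returning a small action object from the handler (rather than doing side effects inline) keeps the dispatch logic easy to unit-test.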

The Product Data Format Is Simple

The hosted integration uses a product JSON format. Each item needs a product ID, a display name, a category, and one or two images. A default flag can also pre-select a product when the app opens.

{
  "products": [
    {
      "id": "product-123",
      "name": "Blue Cotton T-Shirt",
      "category": "top",
      "images": [
        "https://example.com/products/tshirt-lifestyle.jpg",
        "https://example.com/products/tshirt-packshot.jpg"
      ],
      "default": true
    }
  ]
}

Supported categories in the guide are straightforward:

  • top
  • bottom
  • fullBody
  • shoes

The first image is typically used for the result preview, while the second can be used in the selection carousel. If you only provide one image, it is reused for both roles.
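As an illustration, a small helper can translate your own catalog records into this product format. The field names follow the JSON shown above; the helper name and the input record shape are hypothetical.

```javascript
// Categories supported by the guide.
const VALID_CATEGORIES = ['top', 'bottom', 'fullBody', 'shoes'];

function toWearfitsProducts(records) {
  return {
    products: records.map((r, i) => {
      if (!VALID_CATEGORIES.includes(r.category)) {
        throw new Error(`Unsupported category: ${r.category}`);
      }
      return {
        id: r.id,
        name: r.name,
        category: r.category,
        // First image: result preview; second: selection carousel.
        // With only one image, the app reuses it for both roles.
        images: r.images.slice(0, 2),
        ...(i === 0 ? { default: true } : {}), // pre-select the first product
      };
    }),
  };
}
```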

The JavaScript Modal

The JavaScript modal option is especially useful on product detail pages where you want a clear Try On button without embedding the full experience into the layout all the time. In the modal flow, you prepare product data in your frontend from backend data and store it in window.__WEARFITS_CONFIG__. You define handlers such as onComplete, onError, and onClose, set window.__WEARFITS_MODAL_MODE__ = true, create a container element, and then load https://tryon.wearfits.com/embed/modal.js to mount the overlay. This keeps the integration lightweight on the page while still giving your store direct control over what happens after a try-on finishes, fails, or is closed.
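A minimal sketch of that modal bootstrap is shown below. The global names, handler names, and script URL come from the Integration Guide; the config field layout and the container id are assumptions for illustration.

```javascript
// Build the config object the modal script reads from the global.
// The exact field layout is an assumption; the handler names are documented.
function buildModalConfig(products) {
  return {
    products,
    onComplete: (result) => console.log('try-on finished', result),
    onError: (err) => console.error('try-on failed', err),
    onClose: () => console.log('modal closed'),
  };
}

// Browser-only wiring; guarded so the builder stays testable in Node.
if (typeof document !== 'undefined') {
  window.__WEARFITS_CONFIG__ = buildModalConfig([
    {
      id: 'product-123',
      name: 'Blue Cotton T-Shirt',
      category: 'top',
      images: ['https://example.com/products/tshirt-packshot.jpg'],
    },
  ]);
  window.__WEARFITS_MODAL_MODE__ = true;

  // Container element for the overlay (id is hypothetical).
  const container = document.createElement('div');
  container.id = 'wearfits-modal-container';
  document.body.appendChild(container);

  // Load the overlay script last, once config and container exist.
  const script = document.createElement('script');
  script.src = 'https://tryon.wearfits.com/embed/modal.js';
  document.body.appendChild(script);
}
```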

What the API Adds

If iframe or modal integration is the easy route, the API is the flexible route. The API Reference shows that you can handle the full pipeline yourself: create a digital twin, run virtual fitting, poll job status, download the result, and reuse the same twin across future try-ons.

All API calls use the production base URL:

https://api.wearfits.com

And all protected endpoints require an API key sent in the header:

X-API-Key: your_api_key_here

Plain-English takeaway: if your team wants the fastest rollout, use the hosted app. If your team wants tighter control over the whole flow, use the API.
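For server-to-server calls, a tiny helper keeps the base URL and key header consistent. This is a sketch, not an official SDK; only the base URL and the X-API-Key header come from the documentation.

```javascript
// Documented production base URL.
const WEARFITS_API = 'https://api.wearfits.com';

// Build fetch options with the documented X-API-Key header.
// Usage: fetch(`${WEARFITS_API}/api/v1/virtual-fitting`, wearfitsOptions(key, payload))
function wearfitsOptions(apiKey, body) {
  return {
    method: body ? 'POST' : 'GET',
    headers: {
      'X-API-Key': apiKey,
      ...(body ? { 'Content-Type': 'application/json' } : {}),
    },
    ...(body ? { body: JSON.stringify(body) } : {}),
  };
}
```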

 

The Core API Flow in Plain English

The easiest way to understand the API is to think of it as a simple sequence:

  1. Create or reuse a digital twin.
  2. Submit a virtual fitting job.
  3. Poll the job endpoint until it finishes.
  4. Read the result image URL.

The API offers both a two-step flow and a one-request flow. If you want the simplest entry point, the API Reference recommends POST /api/v1/virtual-fitting, because it can create the twin and apply garments in one request. If you want more control, you can first call POST /api/v1/digital-twin and then reuse the returned digitalTwinId later.

Endpoint 1: Create a Digital Twin

The dedicated digital twin endpoint is:

POST /api/v1/digital-twin

The documentation describes four practical input modes across the digital twin and virtual fitting flows:

  • Photo mode: face image plus full-body photo
  • Direct mode: face image plus pre-rendered silhouette image
  • Measurements mode: face image plus body measurements
  • Clothing size mode: face image plus height and clothing size

For most normal e-commerce use cases, photo mode is the most natural because it works with regular customer photos. The body photo must show the full person from head to toe. If you only have size information, measurements mode and clothing size mode are also supported, which is helpful for teams building lighter or lower-friction flows.

The API also supports poses such as default, girl_pose, man_pose, shoe_girl_pose, and standing_arms_down. A useful detail from the API Reference is that pose is part of the cached twin. In other words, the same person in a different pose becomes a different cached twin.
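As a sketch, a photo-mode request body might be assembled like this. The pose names come from the API Reference, but the field names (faceImage, bodyImage, pose) are assumptions; check the documentation for the authoritative schema.

```javascript
// Pose names listed in the API Reference.
const POSES = ['default', 'girl_pose', 'man_pose', 'shoe_girl_pose', 'standing_arms_down'];

function buildDigitalTwinRequest({ faceImage, bodyImage, pose = 'default' }) {
  if (!POSES.includes(pose)) throw new Error(`Unknown pose: ${pose}`);
  // Pose is part of the cached twin: the same person in a different
  // pose becomes a different cached twin.
  return { faceImage, bodyImage, pose };
}
```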

Endpoint 2: Run Virtual Fitting

The recommended API endpoint is:

POST /api/v1/virtual-fitting

This endpoint can either create a twin as part of the request or reuse a saved one. That makes it flexible for both first-time and repeat try-ons.

The request can include garment slots such as:

  • topGarment
  • bottomGarment
  • fullBodyGarment
  • shoes

Each garment field accepts either one image or an array of one or two images. The API Reference explains that the array should be ordered like this:

  • [0] = packshot
  • [1] = optional on-model reference image

That on-model reference is useful because it gives the AI extra context about fit and drape, which can improve results for more complex garments.

{
  "digitalTwinId": "abc123...",
  "topGarment": [
    "https://example.com/top-packshot.jpg",
    "https://example.com/top-on-model.jpg"
  ],
  "bottomGarment": "https://example.com/pants.jpg"
}

Polling the Job Result

Both digital twin creation and virtual fitting are asynchronous jobs. That means the first response usually gives you a job ID, estimated processing time, and a status URL. Then you check the job until it reaches completed or failed.

The polling endpoint is:

GET /api/v1/jobs/{jobId}

According to the API Reference, job statuses can include:

  • queued
  • validating
  • processing
  • uploading
  • completed
  • failed

When the job is done, the response can include result URLs and, for relevant flows, a digitalTwinId. There is also a DELETE /api/v1/jobs/{jobId} endpoint for cancelling jobs that are still early enough in the pipeline.
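A simple polling loop around that endpoint could look like the sketch below. The status fetcher is injected so the loop stays easy to test; the 2-second interval is a placeholder, and in practice you would use the estimated processing time returned when the job was submitted.

```javascript
// Poll GET /api/v1/jobs/{jobId} until the job reaches a terminal state.
// fetchStatus(jobId) should return the parsed job object, e.g. via fetch.
async function pollJob(fetchStatus, jobId, { intervalMs = 2000, maxAttempts = 30 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const job = await fetchStatus(jobId);
    // Terminal statuses per the API Reference.
    if (job.status === 'completed' || job.status === 'failed') return job;
    // queued / validating / processing / uploading: wait and try again.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Job ${jobId} did not finish after ${maxAttempts} polls`);
}
```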

Why the Digital Twin Is the Key Concept

The single most important idea in the API is the digitalTwinId. Once you have it, you do not need to rebuild the shopper model every time. That means future try-ons can be faster, simpler, and more consistent.

The API Reference says digital twins are cached for 30 days, while try-on results are cached for 7 days. This is a very practical design for e-commerce. It lets a customer try many products over time without repeating the full setup process, while also making common repeat requests much faster.

If needed, the docs also mention options like skipCache and skipResultCache to force fresh generation.

What Counts as a Good First Integration

If your team wants a practical first rollout, a good plan would be:

  1. Start with iframe embedding or the modal flow.
  2. Use clean product images with one packshot and, when possible, one on-model reference.
  3. Let shoppers create a digital twin once.
  4. Save the digitalTwinId for later sessions.
  5. Move to direct API integration only if you need more control.

This staged approach is usually better than over-engineering from day one. It lets you learn how customers use the feature before investing in a deeper custom build.

Best Practices That Matter in Real Stores

The Integration Guide and API Reference together point to a few practical best practices.

Use good images. Product photos should be clear, well lit, and preferably at least 500 by 500 pixels. HTTPS image URLs are the safer default. When possible, include both a packshot and an on-model reference.

Validate origins in production. If you use iframe embedding, do not leave * in your postMessage logic. Restrict allowed origins to the real WEARFITS domain.

Design for mobile. Camera access, photo upload, and loading states matter even more on phones. Do not assume a technically working mobile flow is automatically a good mobile experience.

Use caching on purpose. The cache is not just a backend optimization. It is part of the product experience. Reusing a saved twin makes the system feel faster and more polished.

Handle failure clearly. The Integration Guide includes error codes such as TWIN_CREATION_FAILED, FITTING_FAILED, INVALID_IMAGE, NETWORK_ERROR, and TIMEOUT. Users should see plain-language feedback, not silent failures.
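One way to satisfy that last point is a small lookup from error code to customer-facing copy. The error codes come from the Integration Guide; the message wording below is illustrative, not from the documentation.

```javascript
// Plain-language messages for the documented error codes (wording is ours).
const ERROR_MESSAGES = {
  TWIN_CREATION_FAILED: 'We could not build your model from these photos. Try clearer, head-to-toe shots.',
  FITTING_FAILED: 'The try-on could not be generated. Please try again.',
  INVALID_IMAGE: 'That image could not be used. Please upload a clear, well-lit photo.',
  NETWORK_ERROR: 'Connection problem. Please check your network and retry.',
  TIMEOUT: 'This is taking longer than expected. Please try again in a moment.',
};

function userMessage(code) {
  return ERROR_MESSAGES[code] || 'Something went wrong. Please try again.';
}
```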

Rate Limits, Webhooks, and Operational Details

The API Reference also includes a few details that matter once traffic grows:

  • Job submission: 50 requests per minute
  • Job status checks: 500 requests per minute
  • File downloads: 300 requests per minute

The docs also mention rate limit headers: X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset. That is useful if you want cleaner control in production or plan to build queueing on your side.
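As a sketch, a client might use those headers to pace job submissions like this. It assumes X-RateLimit-Reset is a Unix timestamp in seconds, which is an assumption to confirm against the API Reference.

```javascript
// Decide how long to wait before the next job submission, based on the
// documented rate-limit headers (header keys lowercased, as fetch exposes them).
function rateLimitDelayMs(headers, nowMs = Date.now()) {
  const remaining = Number(headers['x-ratelimit-remaining']);
  const resetSec = Number(headers['x-ratelimit-reset']);
  if (Number.isNaN(remaining) || remaining > 0) return 0; // budget left: no wait
  return Math.max(0, resetSec * 1000 - nowMs);            // wait until the window resets
}
```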

Webhooks are supported too. You can pass a webhook URL when you submit a request and get notified when jobs complete. The API Reference explains that webhook signatures use HMAC-SHA256 through the X-Webhook-Signature header. That gives engineering teams a more reliable alternative to constant polling when they want a more event-driven workflow.

Health Checks and Monitoring

The API also includes a health endpoint:

GET /health

This is not the most glamorous part of the integration, but it is useful for monitoring, dashboards, and deployment checks. For teams running server-to-server integrations, it gives one more small layer of operational confidence.

When to Use the Hosted App and When to Use the API

If you want the shortest path to launch, use the hosted app through iframe, modal, or standalone page integration. If you want deep control over your own frontend, your own backend orchestration, or your own shopper flow, use the API.

There is no need to treat this as an all-or-nothing choice. Many brands will do best with a phased approach: hosted integration first, deeper API work second.

Final Takeaway

WEARFITS is not just a visual widget. It is a structured system built around reusable digital twins, job-based processing, product image inputs, and flexible integration choices. The simplest way to think about it is this: the shopper creates a twin once, tries many products afterward, and your store can either embed the experience quickly or control it directly through the API.

If you are planning AI virtual try-on for e-commerce, the strongest strategy is to keep the experience easy for shoppers and the architecture reusable for your team. That is where WEARFITS looks most practical: simple entry points for fast rollout, plus enough API depth for more advanced product and engineering teams later on.

To learn more about how to integrate WEARFITS AI Apparel Try-On into your store, visit our Integration Guide and API Documentation. You can also contact us if you want more information about our Gen AI Try-On.

Frequently Asked Questions

What is the difference between the WEARFITS web modules and the headless API?

The web modules are pre-built UI components (iframe embed and JavaScript modal) that handle the full try-on experience end-to-end, including the camera flow, garment selection, and result rendering. Integration usually takes one to two developer-days. The headless REST API exposes the same try-on capability without any UI — you build the experience yourself using the create-digital-twin and run-virtual-fitting endpoints. Web modules are the fast path; the headless API is for teams that need full UI control or a custom architecture.

How long does WEARFITS Gen AI virtual try-on for apparel take to integrate?

Iframe embedding takes a few hours for a working integration on a product page. The JavaScript modal takes around the same. The headless REST API integration typically takes one to two developer-days for a working prototype, and a week or two for a production-grade implementation with proper error handling, webhooks, and operational monitoring. The playground requires no integration and no API key — you can test in minutes.

Do I need a custom 3D pipeline to use the AI virtual try-on API?

No. WEARFITS Gen AI virtual try-on for apparel generates the digital twin automatically from your existing product photos. You do not need CAD files, 3D modelers, or a per-garment 3D production pipeline. The same API that powers shoes virtual try-on also handles apparel garments — you upload the photo, the AI builds the twin, and you reference it by ID in subsequent try-on calls.

How does authentication work for the WEARFITS try-on API?

Authentication uses an X-API-Key header on every request. API keys are issued instantly when you sign up for a paid plan, and can be rotated from the dashboard. There is no OAuth dance and no per-user authentication required — the API is server-to-server, with the client-side experience handled by the web modules or your own UI built on top of the headless endpoints.

Are virtual try-on API calls synchronous or asynchronous?

The create-digital-twin endpoint returns a jobId immediately and processes the twin asynchronously. You can either poll the job result endpoint or register a webhook to receive a callback when the twin is ready. The run-virtual-fitting endpoint also returns asynchronously for the same reason — rendering a realistic Gen AI try-on result takes a few seconds, so async patterns are the right architecture for production traffic.

What happens if a garment image fails the digital-twin step?

The job result endpoint returns a structured error with a reason code (image too small, occluded garment, unsupported category, etc.). The API returns errors as standard HTTP status codes with a JSON body describing the failure, so you can surface a clean message to your merchandiser or trigger a retry with a better source image. Most failures are caused by low-resolution packshots or photos with the garment partially out of frame.

Can I use the same API for shoes try-on and apparel?

Yes. The same WEARFITS API surface handles virtual try-on for shoes, bags, and apparel categories. The garment category is specified when you create the digital twin. Shoes virtual try-on uses foot-tracking models on the customer side; apparel try-on uses Gen AI rendering on a body silhouette. You only need one API integration to cover all three categories.

What are the rate limits and what happens at scale?

Rate limits are tier-based and visible in the response headers (X-RateLimit-Limit, X-RateLimit-Remaining). For high-traffic stores, the recommended pattern is to pre-generate digital twins for the entire catalog in batch ahead of launch, then call run-virtual-fitting at user request time — that way you are not creating twins on the live shopper path. Talk to the team if you expect sustained throughput above the default tier.

How do I monitor a production WEARFITS integration?

WEARFITS exposes a public /health endpoint for synthetic monitoring, and the job result endpoint returns timing metadata you can pipe into your existing observability stack. For Shopify and headless storefronts, the recommended pattern is to track try-on starts, completions, and failure rates as events — alongside conversion lift on product pages where try-on is enabled.