
Conversation

@Olyno (Contributor) commented Oct 23, 2025

Description

This pull request introduces an interface that lets users update their LLM endpoint URLs inside Dyad.

Closes #816

[screenshot]

Summary by cubic

Add settings to edit local LLM endpoints (Ollama and LM Studio) and route all local model traffic through these values. Users can now point Dyad to remote or custom hosts while keeping sensible defaults.

  • New Features
    • Provider page UI to view/update/reset Ollama and LM Studio endpoints.
    • Persist endpoints in UserSettings with localhost defaults.
    • LM Studio URL normalization: trim, add protocol, default port 1234, strip trailing /v1 and trailing slashes, IPv6-safe (see the sketch after this summary).
    • Ollama host parsing: trim, add protocol, default port 11434, IPv6-safe; use OLLAMA_HOST env if set, else settings.
    • Handlers and clients read settings for API base URLs (Ollama tags, LM Studio models, OpenAI-compatible provider); provider setup reflects local endpoints.
    • Added defaults as constants; tests for URL parsing/normalization and endpoint persistence; clearer connection errors.

Written for commit 7460516. Summary will update on new commits.
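
A minimal sketch of the normalization rules listed above, in TypeScript. The function name matches the PR's test file; the constant name is illustrative, and the sketch assumes IPv6 hosts arrive bracketed (e.g. `[::1]`). Unlike the code cubic flags below, it applies the default port even when a protocol is present:

```ts
const DEFAULT_LM_STUDIO_PORT = 1234; // assumption: mirrors the PR's default constant

function normalizeLmStudioBaseUrl(raw: string): string {
  let input = raw.trim();
  // Add a protocol when none was given, so URL() can parse the host.
  if (!/^[a-z]+:\/\//i.test(input)) {
    input = `http://${input}`;
  }
  const url = new URL(input);
  // Apply the default port even when a protocol was typed without one.
  if (!url.port) {
    url.port = String(DEFAULT_LM_STUDIO_PORT);
  }
  // Strip a trailing /v1 segment and any trailing slashes from the path.
  const path = url.pathname.replace(/\/v1\/?$/, "").replace(/\/+$/, "");
  return `${url.protocol}//${url.host}${path}`;
}
```

Under these rules, an input like `192.168.1.5/v1` would come out as `http://192.168.1.5:1234`.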


Note

Introduces configurable local model endpoints with persistence and consistent usage across the app.

  • New LocalModelEndpointSettings UI on provider pages to view/update/reset ollama and lmstudio endpoints
  • Extend UserSettings with ollamaEndpoint and lmStudioEndpoint and default constants in constants/localModels
  • IPC: local_model_ollama_handler reads from env or settings with improved parseOllamaHost; local_model_lmstudio_handler uses getLMStudioBaseUrl; clearer connection errors
  • Utils: add lm_studio_utils for base URL normalization (protocol/port/trailing /v1 handling, IPv6-safe); get_model_client uses settings-derived base URLs for Ollama and LM Studio
  • UI/State: ProviderSettingsPage renders endpoint settings for local providers; ProviderSettings shows all providers; useLanguageModelProviders marks local providers as configured when endpoints are set
  • Tests: e2e for settings persistence; unit tests for LM Studio URL normalization, Ollama host parsing, and updated settings defaults

Written by Cursor Bugbot for commit 7460516. This will update automatically on new commits.

@cubic-dev-ai (bot) left a comment

3 issues found across 11 files

Prompt for AI agents (all 3 issues)

Understand the root cause of the following 3 issues and fix them.


<file name="src/ipc/utils/lm_studio_utils.ts">

<violation number="1" location="src/ipc/utils/lm_studio_utils.ts:8">
If the user enters an LM Studio URL with http/https but no port (for example `http://localhost`), this branch returns it unchanged so the default 1234 port is never added. The resulting base URL points to port 80, causing LM Studio requests to fail. Please ensure the default port is appended even when a protocol is present but no explicit port was provided.</violation>
</file>

<file name="src/__tests__/normalizeLmStudioBaseUrl.test.ts">

<violation number="1" location="src/__tests__/normalizeLmStudioBaseUrl.test.ts:43">
This test overrides LM_STUDIO_BASE_URL_FOR_TESTING but does not restore the previous value, so any existing setting is lost for later tests. Please capture the original value and restore it in a finally block or after the assertion to keep tests isolated.</violation>
</file>
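
To keep that test isolated, the usual pattern is to capture the override and restore it afterwards. A vitest-style sketch, assuming LM_STUDIO_BASE_URL_FOR_TESTING is an environment variable (if it is a module-level override instead, the save/restore shape is the same):

```ts
import { test } from "vitest";

test("normalizes a custom LM Studio base URL", () => {
  const original = process.env.LM_STUDIO_BASE_URL_FOR_TESTING;
  process.env.LM_STUDIO_BASE_URL_FOR_TESTING = "http://localhost:9999";
  try {
    // ...assertions against the overridden endpoint go here
  } finally {
    // Restore the previous value (or remove the key) so later tests see it.
    if (original === undefined) {
      delete process.env.LM_STUDIO_BASE_URL_FOR_TESTING;
    } else {
      process.env.LM_STUDIO_BASE_URL_FOR_TESTING = original;
    }
  }
});
```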

<file name="src/components/LocalModelEndpointSettings.tsx">

<violation number="1" location="src/components/LocalModelEndpointSettings.tsx:99">
Using a single shared saving flag, this finally block resets it to null even when another save request is still running. If the other endpoint is saved before this call completes, its button becomes re-enabled and can resubmit while the request is still in flight. Guard the reset so only the request that set the flag clears it.</violation>
</file>
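
For the shared saving flag, one guarded shape is to record which endpoint is saving and let only the owning request clear it. A hedged sketch with illustrative names (the component's actual state shape may differ):

```tsx
import { useState } from "react";

type ProviderId = "ollama" | "lmstudio";

function useGuardedSave(save: (id: ProviderId, value: string) => Promise<void>) {
  // Track *which* endpoint is saving rather than a single shared boolean.
  const [saving, setSaving] = useState<ProviderId | null>(null);

  const handleSave = async (id: ProviderId, value: string) => {
    setSaving(id);
    try {
      await save(id, value);
    } finally {
      // Clear the flag only if this request still owns it; a newer save
      // for the other endpoint keeps its own flag until it finishes.
      setSaving((current) => (current === id ? null : current));
    }
  };

  return { saving, handleSave };
}
```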


@wwwillchen (Collaborator) left a comment

Thanks for the pull request.

A couple of high-level thoughts:

  • Given that dyad is currently using the OLLAMA_HOST env var for ollama, is there a need to configure the ollama URL inside dyad? I'm not an expert in ollama, but it seems not super necessary to configure the ollama URL assuming we're picking up the env var correctly. For LM Studio, I could see this being useful as I'm not aware of any similar env var for it.
  • UX-wise: I think we should configure this in the Model Providers section and not in the AI section. Similar to how each of the cloud providers have their own dedicated page, we should display the local model providers in this grid and then allow users to navigate to the provider-specific settings page and configure the URL:
[screenshot]

@elsung commented Oct 30, 2025

> Given that dyad is currently using the OLLAMA_HOST env var for ollama, is there a need to configure the ollama URL inside dyad?

Hm, not sure if there are other folks like me who use Ollama + Tailscale, but for me there's a definite need: it lets me use Dyad on my laptop when I'm out and about and connect to my remote Ollama instance.

@Olyno (Contributor, Author) commented Nov 4, 2025

> Given that dyad is currently using the OLLAMA_HOST env var for ollama, is there a need to configure the ollama URL inside dyad? I'm not an expert in ollama, but it seems not super necessary to configure the ollama URL assuming we're picking up the env var correctly. For LM Studio, I could see this being useful as I'm not aware of any similar env var for it.

As @elsung mentioned, the primary use case here is to let people point at remote instances without having to restart Dyad every time their Ollama instance URL changes. I agree it shouldn't happen often, but being able to change the URL directly from the dashboard is a nice-to-have.

> UX-wise: I think we should configure this in the Model Providers section and not in the AI section. Similar to how each of the cloud providers have their own dedicated page, we should display the local model providers in this grid and then allow users to navigate to the provider-specific settings page and configure the URL.

Got it, working on it!

@Olyno (Contributor, Author) commented Dec 20, 2025

Sorry for the delay! Would something like this work, @wwwillchen?

[screenshot]

@wwwillchen (Collaborator) commented

@Olyno no worries. I think the input should be inside the provider details page for Ollama and LM Studio respectively, similar to how all the other providers work.

@Olyno (Contributor, Author) commented Dec 24, 2025

@wwwillchen Sounds good. Does something like this look good, or would you prefer something else?

[screenshots]

@wwwillchen (Collaborator) commented

@Olyno looks good to me, thanks

@Olyno (Contributor, Author) commented Jan 2, 2026

Changes applied @wwwillchen

🎉 🥳 Happy new year 2026 🥳 🎉

Let me know if there is anything that needs changing.

@cubic-dev-ai (bot) left a comment

2 issues found across 15 files

Prompt for AI agents (all issues)

Check if these issues are valid — if so, understand the root cause of each and fix them.


<file name="src/ipc/handlers/local_model_ollama_handler.ts">

<violation number="1" location="src/ipc/handlers/local_model_ollama_handler.ts:17">
P2: Duplicate unreachable code: The second `if (!host)` check is dead code since the function already returns at the first identical check. This block should be removed.</violation>
</file>

<file name="src/components/LocalModelEndpointSettings.tsx">

<violation number="1" location="src/components/LocalModelEndpointSettings.tsx:191">
P2: Missing `key` prop in `.map()` - React will emit a warning and may have rendering issues when the list changes. Wrap the rendered element with a Fragment containing a key.</violation>
</file>
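
For the missing key, wrapping each mapped element in a keyed Fragment is the standard fix. A minimal sketch (component and prop names are illustrative, not the PR's actual code):

```tsx
import { Fragment } from "react";

function EndpointList({ providers }: { providers: { id: string; label: string }[] }) {
  return (
    <div>
      {providers.map((p) => (
        // The key lives on the Fragment so each list entry stays stable.
        <Fragment key={p.id}>
          <h3>{p.label}</h3>
          {/* the per-provider endpoint input would render here */}
        </Fragment>
      ))}
    </div>
  );
}
```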


@greptile-apps (bot) commented Jan 2, 2026

Greptile Summary

This PR adds UI settings for customizing Ollama and LM Studio endpoint URLs, replacing hardcoded localhost values with user-configurable endpoints stored in UserSettings.

Key Changes:

  • Added ollamaEndpoint and lmStudioEndpoint fields to UserSettings schema with default localhost values
  • Created LocalModelEndpointSettings component for viewing/editing endpoint URLs with save/reset functionality
  • Updated getOllamaApiUrl() and getLMStudioBaseUrl() to read from settings (with env var overrides for testing); see the sketch after this list
  • Implemented URL normalization for LM Studio (protocol/port defaults, /v1 stripping, IPv6 support)
  • Enhanced parseOllamaHost() with whitespace trimming and improved IPv6 handling
  • Updated provider grid to display local providers (Ollama, LM Studio) alongside cloud providers
  • Added comprehensive unit tests and E2E test for endpoint persistence
  • Improved error messages to show actual endpoint URL when connection fails
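
A sketch of the resolution order described above: env var, then persisted setting, then default. The names getOllamaApiUrl, parseOllamaHost, OLLAMA_HOST, and DEFAULT_OLLAMA_ENDPOINT come from this PR and review; the bodies are reduced stand-ins, not the PR's exact code:

```ts
const DEFAULT_OLLAMA_ENDPOINT = "http://localhost:11434"; // assumed default value

// Env var wins (useful for testing/CI), then the persisted setting,
// then the localhost default.
function getOllamaApiUrl(settings: { ollamaEndpoint?: string }): string {
  const host =
    process.env.OLLAMA_HOST?.trim() ||
    settings.ollamaEndpoint?.trim() ||
    DEFAULT_OLLAMA_ENDPOINT;
  return parseOllamaHost(host);
}

// Reduced stand-in for the PR's parseOllamaHost: add a protocol and the
// default Ollama port when missing; URL() handles bracketed IPv6 hosts.
function parseOllamaHost(raw: string): string {
  let input = raw.trim();
  if (!/^[a-z]+:\/\//i.test(input)) input = `http://${input}`;
  const url = new URL(input);
  if (!url.port) url.port = "11434";
  return `${url.protocol}//${url.host}`;
}
```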

Implementation Quality:

  • Follows existing IPC patterns correctly (settings updated via useSettings hook)
  • URL parsing logic is well-tested with edge cases covered
  • Maintains backward compatibility through sensible defaults
  • Proper separation of concerns between UI, hooks, and handlers
  • Env var override preserved for testing scenarios

Confidence Score: 4/5

  • Safe to merge with minor style improvements available
  • Well-implemented feature with comprehensive tests and proper error handling. The two style comments about redundant state updates are minor and don't affect functionality. Code follows existing patterns, handles edge cases (IPv6, whitespace, empty values), and maintains backward compatibility. The URL normalization logic is thoroughly tested.
  • Pay attention to src/components/LocalModelEndpointSettings.tsx which has redundant state updates (lines 93-98, 120-125) that can be simplified

Important Files Changed

  • src/ipc/utils/lm_studio_utils.ts: replaced the hardcoded URL with dynamic endpoint resolution; added normalization logic for LM Studio URLs including protocol, port, and path handling with /v1 stripping
  • src/ipc/handlers/local_model_ollama_handler.ts: enhanced parseOllamaHost with whitespace trimming and IPv6 bracket handling; getOllamaApiUrl now reads from settings when the env var is not set; improved error messages with the actual endpoint URL
  • src/components/LocalModelEndpointSettings.tsx: new component providing UI for viewing/editing Ollama and LM Studio endpoints with save/reset functionality and validation
  • src/components/settings/ProviderSettingsPage.tsx: added local provider configuration support; integrated LocalModelEndpointSettings for ollama/lmstudio providers with conditional rendering logic
  • src/hooks/useLanguageModelProviders.ts: added setup checks for ollama/lmstudio based on endpoint settings; included local providers in the anyProviderSetup check

Sequence Diagram

sequenceDiagram
    participant User
    participant UI as ProviderSettingsPage
    participant Component as LocalModelEndpointSettings
    participant Hook as useSettings
    participant IPC as IpcClient
    participant Main as Main Process
    participant Settings as settings.ts
    participant Handler as OllamaHandler/LMStudioHandler
    
    User->>UI: Navigate to Ollama/LM Studio settings
    UI->>Component: Render LocalModelEndpointSettings
    Component->>Hook: useSettings()
    Hook->>IPC: getUserSettings()
    IPC->>Main: invoke("user-settings:get")
    Main->>Settings: readSettings()
    Settings-->>Main: UserSettings (with endpoints)
    Main-->>IPC: UserSettings
    IPC-->>Hook: UserSettings
    Hook-->>Component: settings with ollamaEndpoint/lmStudioEndpoint
    Component->>Component: Update local state via useEffect
    Component-->>User: Display current endpoint values
    
    User->>Component: Edit endpoint URL & click Save
    Component->>Component: Trim input value
    Component->>Hook: updateSettings({ ollamaEndpoint: value })
    Hook->>IPC: setUserSettings({ ollamaEndpoint: value })
    IPC->>Main: invoke("user-settings:set")
    Main->>Settings: writeSettings(newSettings)
    Settings-->>Main: Updated UserSettings
    Main-->>IPC: Updated UserSettings
    IPC-->>Hook: Updated UserSettings
    Hook->>Hook: Update userSettingsAtom
    Hook-->>Component: Updated settings
    Component->>Component: useEffect syncs local state
    Component-->>User: Show success toast
    
    User->>UI: Use Ollama features
    UI->>Handler: fetchOllamaModels()
    Handler->>Settings: readSettings()
    Settings-->>Handler: UserSettings with ollamaEndpoint
    Handler->>Handler: getOllamaApiUrl() uses settings.ollamaEndpoint
    Handler->>Handler: parseOllamaHost(endpoint)
    Handler->>Handler: fetch(`${apiUrl}/api/tags`)
    Handler-->>UI: List of models

@greptile-apps (bot) commented Jan 2, 2026

Greptile found no issues!


@wwwillchen (Collaborator) commented
@BugBot run

@wwwillchen (Collaborator) left a comment

Thanks for the changes! Mostly looks good; a few minor changes and then I think we can merge.

await po.page.getByText("Ollama", { exact: true }).click();
await po.page.waitForSelector('h1:has-text("Configure Ollama")', {
state: "visible",
timeout: 5000,

use Timeout.MEDIUM instead of hardcoding 5000 - same below

MEDIUM: process.env.CI ? 30_000 : 15_000,

selectedModel: LargeLanguageModelSchema,
providerSettings: z.record(z.string(), ProviderSettingSchema),
ollamaEndpoint: z.string(),
lmStudioEndpoint: z.string(),

let's make these optional properties

provider: "auto",
},
providerSettings: {},
ollamaEndpoint: DEFAULT_OLLAMA_ENDPOINT,

Let's not set these here. Instead, just use these default values when the field is undefined; otherwise, you'll need to rebaseline a lot of our e2e tests (see the CI GitHub Actions).
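
Taken together, the two suggestions point at something like this sketch: optional schema fields, with the defaults applied only where the value is read. The schema excerpt follows the snippets above; the schema and helper names are illustrative:

```ts
import { z } from "zod";

const UserSettingsSchema = z.object({
  // ...other fields elided
  ollamaEndpoint: z.string().optional(),
  lmStudioEndpoint: z.string().optional(),
});
type UserSettings = z.infer<typeof UserSettingsSchema>;

const DEFAULT_OLLAMA_ENDPOINT = "http://localhost:11434";

// Default at read time, so DEFAULT_SETTINGS (and the e2e baselines that
// snapshot it) never change.
function getOllamaEndpoint(settings: UserSettings): string {
  return settings.ollamaEndpoint ?? DEFAULT_OLLAMA_ENDPOINT;
}
```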

}
if (provider === "lmstudio") {
return Boolean(settings?.lmStudioEndpoint?.trim());
}

Local providers always appear configured, breaking setup flow

The isProviderSetup function for local providers checks Boolean(settings?.ollamaEndpoint?.trim()) and Boolean(settings?.lmStudioEndpoint?.trim()). Since DEFAULT_SETTINGS always initializes these with non-empty default values (http://localhost:11434 and http://localhost:1234), these checks will always return true. This means isAnyProviderSetup() will always return true when local providers exist, causing the setup banner to never prompt new users to configure AI access—even if they have no working providers configured.
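
With the endpoints made optional as suggested above, the setup check can treat "configured" as "explicitly set by the user". A hedged sketch (a stand-in for the actual isProviderSetup, with a reduced settings type):

```ts
// Only a user-set endpoint counts as configured; an undefined field
// (falling back to the localhost default elsewhere) does not.
function isLocalProviderSetup(
  provider: "ollama" | "lmstudio",
  settings: { ollamaEndpoint?: string; lmStudioEndpoint?: string },
): boolean {
  const endpoint =
    provider === "ollama" ? settings.ollamaEndpoint : settings.lmStudioEndpoint;
  return Boolean(endpoint?.trim());
}
```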


} catch (error) {
console.error("Error parsing URL:", error);
return urlString;
}

Explicit port 80/443 overwritten with default port

The ensurePort function checks if (!url.port) to decide whether to add the default port. However, the JavaScript URL API normalizes default ports (80 for HTTP, 443 for HTTPS) to an empty string. If a user explicitly enters http://example.com:80 or https://example.com:443, the url.port property will be "", causing the code to overwrite their explicit port with 1234. While unlikely for LM Studio use cases, this could cause unexpected connection failures if someone runs the service behind a standard HTTP/HTTPS reverse proxy.
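
One way around that normalization, sketched on the assumption that the raw input string is still available when the default port is applied (ensurePort is the name used above; hasExplicitPort is illustrative):

```ts
// URL() drops default ports: new URL("http://x:80").port === "".
// Inspect the raw string to see whether a port was explicitly typed.
function hasExplicitPort(raw: string): boolean {
  const host = raw.replace(/^[a-z]+:\/\//i, "").split("/")[0];
  // IPv6 hosts are bracketed, e.g. [::1]:443; other hosts end in :<digits>.
  return /\]:\d+$/.test(host) || (!host.startsWith("[") && /:\d+$/.test(host));
}

function ensurePort(raw: string, defaultPort: number): string {
  if (hasExplicitPort(raw)) return raw; // the typed port, even :80/:443, wins
  const url = new URL(raw); // assumes a protocol was added earlier
  url.port = String(defaultPort);
  return url.toString().replace(/\/$/, "");
}
```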


@github-actions (bot) commented Jan 5, 2026

🎭 Playwright Test Results

❌ Some tests failed

| OS | Passed | Failed | Flaky | Skipped |
| --- | --- | --- | --- | --- |
| 🍎 macOS | 209 | 12 | 3 | 75 |
| 🪟 Windows | 206 | 14 | 0 | 75 |

Summary: 415 passed, 26 failed, 3 flaky, 150 skipped

Failed Tests

🍎 macOS

  • local_provider_settings.spec.ts > Local provider endpoint settings persist
    • Error: expect(locator).toBeVisible() failed
  • release_channel.spec.ts > release channel - change from stable to beta and back
    • Error: expect(string).toMatchSnapshot(expected) failed
  • setup.spec.ts > setup ai provider
    • TimeoutError: locator.click: Timeout 30000ms exceeded.
  • smart_context_options.spec.ts > switching smart context mode saves the right setting
    • Error: expect(string).toMatchSnapshot(expected) failed
  • telemetry.spec.ts > telemetry - accept
    • Error: expect(string).toMatchSnapshot(expected) failed
  • telemetry.spec.ts > telemetry - reject
    • Error: expect(string).toMatchSnapshot(expected) failed
  • telemetry.spec.ts > telemetry - later
    • Error: expect(string).toMatchSnapshot(expected) failed
  • template-community.spec.ts > template - community
    • Error: expect(string).toMatchSnapshot(expected) failed
  • template-create-nextjs.spec.ts > create next.js app
    • Error: expect(string).toMatchSnapshot(expected) failed
  • thinking_budget.spec.ts > thinking budget
    • Error: expect(string).toMatchSnapshot(expected) failed
  • ... and 2 more

🪟 Windows

  • auto_update.spec.ts > auto update - disable and enable
    • Error: expect(string).toMatchSnapshot(expected) failed
  • context_manage.spec.ts > manage context - smart context
    • Error: expect(string).toMatchSnapshot(expected) failed
  • context_manage.spec.ts > manage context - smart context - auto-includes only
    • Error: expect(string).toMatchSnapshot(expected) failed
  • local_provider_settings.spec.ts > Local provider endpoint settings persist
    • Error: expect(locator).toBeVisible() failed
  • release_channel.spec.ts > release channel - change from stable to beta and back
    • Error: expect(string).toMatchSnapshot(expected) failed
  • setup.spec.ts > setup ai provider
    • TimeoutError: locator.click: Timeout 30000ms exceeded.
  • smart_context_options.spec.ts > switching smart context mode saves the right setting
    • Error: expect(string).toMatchSnapshot(expected) failed
  • supabase_migrations.spec.ts > supabase migrations
    • Error: ENOENT: no such file or directory, scandir 'C:\Users\RUNNER~1\AppData\Local\Temp\dyad-e2e-tests-1767651188601\dyad-apps\graceful-parrot-buzz\su...
  • telemetry.spec.ts > telemetry - accept
    • Error: expect(string).toMatchSnapshot(expected) failed
  • telemetry.spec.ts > telemetry - reject
    • Error: expect(string).toMatchSnapshot(expected) failed
  • ... and 4 more

⚠️ Flaky Tests

🍎 macOS

  • select_component.spec.ts > select component next.js (passed after 1 retry)
  • visual_editing.spec.ts > edit style of one selected component (passed after 2 retries)
  • visual_editing.spec.ts > discard changes (passed after 1 retry)



Development

Successfully merging this pull request may close this issue:

  • Allow to change the ollama and Lm Studio url (#816)