feat: allow to update llm urls #1620
base: main
Conversation
3 issues found across 11 files
Prompt for AI agents (all 3 issues)
Understand the root cause of the following 3 issues and fix them.
<file name="src/ipc/utils/lm_studio_utils.ts">
<violation number="1" location="src/ipc/utils/lm_studio_utils.ts:8">
If the user enters an LM Studio URL with http/https but no port (for example `http://localhost`), this branch returns it unchanged so the default 1234 port is never added. The resulting base URL points to port 80, causing LM Studio requests to fail. Please ensure the default port is appended even when a protocol is present but no explicit port was provided.</violation>
</file>
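A minimal sketch of the requested fix, reusing the `normalizeLmStudioBaseUrl` name from the test file (the real helper's shape may differ):

```ts
// Sketch: append LM Studio's default port whenever none was given,
// even if the input already carries an http/https protocol.
function normalizeLmStudioBaseUrl(input: string): string {
  const withProtocol = /^https?:\/\//i.test(input) ? input : `http://${input}`;
  const url = new URL(withProtocol);
  if (!url.port) {
    // Previously, protocol-bearing input was returned unchanged here,
    // so `http://localhost` silently pointed at port 80.
    url.port = "1234";
  }
  return url.toString();
}
```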
<file name="src/__tests__/normalizeLmStudioBaseUrl.test.ts">
<violation number="1" location="src/__tests__/normalizeLmStudioBaseUrl.test.ts:43">
This test overrides LM_STUDIO_BASE_URL_FOR_TESTING but does not restore the previous value, so any existing setting is lost for later tests. Please capture the original value and restore it in a finally block or after the assertion to keep tests isolated.</violation>
</file>
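A hedged sketch of the capture-and-restore pattern, assuming a Vitest-style test and that the override lives in `process.env` (the assertion is illustrative):

```ts
import { expect, test } from "vitest";

test("uses the LM_STUDIO_BASE_URL_FOR_TESTING override", () => {
  // Capture the original value so later tests keep their environment.
  const original = process.env.LM_STUDIO_BASE_URL_FOR_TESTING;
  process.env.LM_STUDIO_BASE_URL_FOR_TESTING = "http://localhost:9999";
  try {
    // ...assert against the overridden base URL here...
    expect(process.env.LM_STUDIO_BASE_URL_FOR_TESTING).toBe("http://localhost:9999");
  } finally {
    // Restore (or delete) the variable even if the assertion throws.
    if (original === undefined) {
      delete process.env.LM_STUDIO_BASE_URL_FOR_TESTING;
    } else {
      process.env.LM_STUDIO_BASE_URL_FOR_TESTING = original;
    }
  }
});
```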
<file name="src/components/LocalModelEndpointSettings.tsx">
<violation number="1" location="src/components/LocalModelEndpointSettings.tsx:99">
Using a single shared saving flag, this finally block resets it to null even when another save request is still running. If the other endpoint is saved before this call completes, its button becomes re-enabled and can resubmit while the request is still in flight. Guard the reset so only the request that set the flag clears it.</violation>
</file>
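A sketch of guarding the reset inside the component, assuming a shared `savingProvider` state; the handler and setting names are illustrative, not the component's actual ones:

```tsx
import { useState } from "react";

const [savingProvider, setSavingProvider] =
  useState<"ollama" | "lmstudio" | null>(null);

const handleSave = async (provider: "ollama" | "lmstudio", value: string) => {
  setSavingProvider(provider);
  try {
    await updateSettings(
      provider === "ollama" ? { ollamaEndpoint: value } : { lmStudioEndpoint: value },
    );
  } finally {
    // Only clear the flag if this request is the one that set it, so a
    // concurrent save of the other endpoint isn't re-enabled early.
    setSavingProvider((current) => (current === provider ? null : current));
  }
};
```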
wwwillchen
left a comment
Thanks for the pull request.
A couple of high-level thoughts:
- Given that dyad is currently using the `OLLAMA_HOST` env var for ollama, is there a need to configure the ollama URL inside dyad? I'm not an expert in ollama, but it seems not super necessary to configure the ollama URL assuming we're picking up the env var correctly. For LM Studio, I could see this being useful as I'm not aware of any similar env var for it.
- UX-wise: I think we should configure this in the Model Providers section and not in the AI section. Similar to how each of the cloud providers has its own dedicated page, we should display the local model providers in this grid and then allow users to navigate to the provider-specific settings page and configure the URL:
As @elsung mentioned, the primary use-case here is to allow people to have remote instances, without having to reload their Dyad instance each time they need to change their Ollama instance URL. I agree it shouldn't happen a lot, but it's nice to be able to change the URL directly from the dashboard.
Got it, working on it!
Sorry for the delay! Would something like that work? @wwwillchen
@Olyno no worries. I think the input should be inside the provider details page for Ollama and LM Studio respectively, similar to how all the other providers work.
@wwwillchen Sounds good. Does something like this look good, or would you prefer something else?
@Olyno looks good to me, thanks
Changes applied @wwwillchen 🎉 🥳 Happy new year 2026 🥳 🎉 Let me know if there is anything that requires changes.
2 issues found across 15 files
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="src/ipc/handlers/local_model_ollama_handler.ts">
<violation number="1" location="src/ipc/handlers/local_model_ollama_handler.ts:17">
P2: Duplicate unreachable code: The second `if (!host)` check is dead code since the function already returns at the first identical check. This block should be removed.</violation>
</file>
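For illustration, the deduplicated guard might read as follows (names are taken from the PR summary; exact signatures are assumptions):

```ts
function getOllamaApiUrl(): string {
  const host = process.env.OLLAMA_HOST ?? readSettings().ollamaEndpoint;
  if (!host) {
    // Single early return; the second, identical `if (!host)` block is removed.
    return DEFAULT_OLLAMA_ENDPOINT;
  }
  return parseOllamaHost(host);
}
```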
<file name="src/components/LocalModelEndpointSettings.tsx">
<violation number="1" location="src/components/LocalModelEndpointSettings.tsx:191">
P2: Missing `key` prop in `.map()` - React will emit a warning and may have rendering issues when the list changes. Wrap the rendered element with a Fragment containing a key.</violation>
</file>
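A sketch of the suggested fix; the `endpoints` array, its fields, and `EndpointRow` are hypothetical stand-ins for the component's actual mapped list:

```tsx
import { Fragment } from "react";

{endpoints.map((endpoint) => (
  // A keyed Fragment lets React track each row across re-renders.
  <Fragment key={endpoint.provider}>
    <EndpointRow endpoint={endpoint} />
  </Fragment>
))}
```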
Greptile Summary

This PR adds UI settings for customizing Ollama and LM Studio endpoint URLs, replacing hardcoded localhost values with user-configurable endpoints stored in `UserSettings`.

Confidence Score: 4/5
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant UI as ProviderSettingsPage
    participant Component as LocalModelEndpointSettings
    participant Hook as useSettings
    participant IPC as IpcClient
    participant Main as Main Process
    participant Settings as settings.ts
    participant Handler as OllamaHandler/LMStudioHandler

    User->>UI: Navigate to Ollama/LM Studio settings
    UI->>Component: Render LocalModelEndpointSettings
    Component->>Hook: useSettings()
    Hook->>IPC: getUserSettings()
    IPC->>Main: invoke("user-settings:get")
    Main->>Settings: readSettings()
    Settings-->>Main: UserSettings (with endpoints)
    Main-->>IPC: UserSettings
    IPC-->>Hook: UserSettings
    Hook-->>Component: settings with ollamaEndpoint/lmStudioEndpoint
    Component->>Component: Update local state via useEffect
    Component-->>User: Display current endpoint values

    User->>Component: Edit endpoint URL & click Save
    Component->>Component: Trim input value
    Component->>Hook: updateSettings({ ollamaEndpoint: value })
    Hook->>IPC: setUserSettings({ ollamaEndpoint: value })
    IPC->>Main: invoke("user-settings:set")
    Main->>Settings: writeSettings(newSettings)
    Settings-->>Main: Updated UserSettings
    Main-->>IPC: Updated UserSettings
    IPC-->>Hook: Updated UserSettings
    Hook->>Hook: Update userSettingsAtom
    Hook-->>Component: Updated settings
    Component->>Component: useEffect syncs local state
    Component-->>User: Show success toast

    User->>UI: Use Ollama features
    UI->>Handler: fetchOllamaModels()
    Handler->>Settings: readSettings()
    Settings-->>Handler: UserSettings with ollamaEndpoint
    Handler->>Handler: getOllamaApiUrl() uses settings.ollamaEndpoint
    Handler->>Handler: parseOllamaHost(endpoint)
    Handler->>Handler: fetch(`${apiUrl}/api/tags`)
    Handler-->>UI: List of models
```
Greptile found no issues!
@BugBot run |
wwwillchen
left a comment
thanks for the changes! mostly looks good, a few minor changes and then i think we can merge
```ts
await po.page.getByText("Ollama", { exact: true }).click();
await po.page.waitForSelector('h1:has-text("Configure Ollama")', {
  state: "visible",
  timeout: 5000,
```
use Timeout.MEDIUM instead of hardcoding 5000 - same below
dyad/e2e-tests/helpers/test_helper.ts (line 20 in 319d2b7):

```ts
MEDIUM: process.env.CI ? 30_000 : 15_000,
```
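Applied to the snippet above, the suggestion would read:

```ts
await po.page.waitForSelector('h1:has-text("Configure Ollama")', {
  state: "visible",
  timeout: Timeout.MEDIUM, // 30s on CI, 15s locally, per test_helper.ts
});
```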
```ts
selectedModel: LargeLanguageModelSchema,
providerSettings: z.record(z.string(), ProviderSettingSchema),
ollamaEndpoint: z.string(),
lmStudioEndpoint: z.string(),
```
let's make these optional properties
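In zod terms, the suggestion amounts to:

```ts
ollamaEndpoint: z.string().optional(),
lmStudioEndpoint: z.string().optional(),
```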
```ts
  provider: "auto",
},
providerSettings: {},
ollamaEndpoint: DEFAULT_OLLAMA_ENDPOINT,
```
let's not set these here. instead, just use these default values when the field is undefined. otherwise, you'll need to rebaseline a lot of our e2e tests (see CI github actions)
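One way to read this suggestion, assuming a `DEFAULT_LM_STUDIO_ENDPOINT` constant exists alongside `DEFAULT_OLLAMA_ENDPOINT`:

```ts
// Resolve the default lazily at the point of use instead of persisting it:
const ollamaEndpoint = settings.ollamaEndpoint ?? DEFAULT_OLLAMA_ENDPOINT;
const lmStudioEndpoint = settings.lmStudioEndpoint ?? DEFAULT_LM_STUDIO_ENDPOINT;
```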
```ts
}
if (provider === "lmstudio") {
  return Boolean(settings?.lmStudioEndpoint?.trim());
}
```
Local providers always appear configured, breaking setup flow

The `isProviderSetup` function for local providers checks `Boolean(settings?.ollamaEndpoint?.trim())` and `Boolean(settings?.lmStudioEndpoint?.trim())`. Since `DEFAULT_SETTINGS` always initializes these with non-empty default values (`http://localhost:11434` and `http://localhost:1234`), these checks will always return true. This means `isAnyProviderSetup()` will always return true when local providers exist, causing the setup banner to never prompt new users to configure AI access, even if they have no working providers configured.
Additional Locations (1)
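If the endpoint fields become optional and unset by default (per the review comments above), the existing truthiness check starts doing the right thing; a sketch:

```ts
// With no persisted default, the endpoint is truthy only when the user
// explicitly configured one, so local providers no longer always pass.
if (provider === "ollama") {
  return Boolean(settings?.ollamaEndpoint?.trim());
}
if (provider === "lmstudio") {
  return Boolean(settings?.lmStudioEndpoint?.trim());
}
```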
```ts
} catch (error) {
  console.error("Error parsing URL:", error);
  return urlString;
}
```
Explicit port 80/443 overwritten with default port

The `ensurePort` function checks `if (!url.port)` to decide whether to add the default port. However, the JavaScript URL API normalizes default ports (80 for HTTP, 443 for HTTPS) to an empty string. If a user explicitly enters `http://example.com:80` or `https://example.com:443`, the `url.port` property will be `""`, causing the code to overwrite their explicit port with 1234. While unlikely for LM Studio use cases, this could cause unexpected connection failures if someone runs the service behind a standard HTTP/HTTPS reverse proxy.
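One possible mitigation, sketched under the assumption that inputs carry no userinfo component: inspect the raw string for an explicit port before appending the default.

```ts
// The URL API reports port "" for explicit :80/:443, so check the raw
// input's host segment for a port instead of trusting url.port alone.
function hasExplicitPort(raw: string): boolean {
  const { protocol } = new URL(raw);
  const host = raw.slice(protocol.length + 2).split("/")[0];
  return host.startsWith("[")
    ? /\]:\d+$/.test(host) // bracketed IPv6, e.g. [::1]:443
    : /:\d+$/.test(host); // e.g. example.com:80
}

function ensurePort(urlString: string, defaultPort: number): string {
  try {
    const url = new URL(urlString);
    if (!url.port && !hasExplicitPort(urlString)) {
      url.port = String(defaultPort);
    }
    return url.toString();
  } catch (error) {
    console.error("Error parsing URL:", error);
    return urlString;
  }
}
```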
🎭 Playwright Test Results

❌ Some tests failed

Summary: 415 passed, 26 failed, 3 flaky, 150 skipped

Failed Tests
🍎 macOS
🪟 Windows





Description
This pull request introduces an interface that allows users to update their LLM URLs inside Dyad.
Closes #816
Summary by cubic
Add settings to edit local LLM endpoints (Ollama and LM Studio) and route all local model traffic through these values. Users can now point Dyad to remote or custom hosts while keeping sensible defaults.
Written for commit 7460516. Summary will update on new commits.
Note
Introduces configurable local model endpoints with persistence and consistent usage across the app.
- `LocalModelEndpointSettings` UI on provider pages to view/update/reset `ollama` and `lmstudio` endpoints
- Extends `UserSettings` with `ollamaEndpoint` and `lmStudioEndpoint`, and adds default constants in `constants/localModels`
- `local_model_ollama_handler` reads from env or settings with an improved `parseOllamaHost`; `local_model_lmstudio_handler` uses `getLMStudioBaseUrl`; clearer connection errors
- `lm_studio_utils` for base URL normalization (protocol/port/trailing `/v1` handling, IPv6-safe); `get_model_client` uses settings-derived base URLs for Ollama and LM Studio
- `ProviderSettingsPage` renders endpoint settings for local providers; `ProviderSettings` shows all providers; `useLanguageModelProviders` marks local providers as configured when endpoints are set

Written by Cursor Bugbot for commit 7460516. This will update automatically on new commits.