
Commit 062b5e8

fix: resolve all ST1013, S1002, QF1011, and context leak issues (#1327)
* fix(cluster): linter warnings
* fix(server): simplify staticcheck linter warnings
  - simplify bool comparisons (== false -> !)
  - use strings.TrimPrefix() directly without HasPrefix check
  - remove unnecessary blank identifier in range loops
  - remove empty if branches
  - fix unused error in server_peer.go
* fix(server): fix ineffassign and dead code issues
  - remove ineffectual peerUser assignment in api.go
  - remove unused clientAddress assignment in api_database.go
  - fix ineffectual allTests initialization in regtest.go
  - convert dead if statement to for loop in server_del.go
  - ignore unused error from commitIter.ForEach in server_git.go
* fix(server): fix uinfo and peerUser handling in peer handlers
  - remove unused uinfo from handlerMuxPeerNodes
  - add peerUser extraction in handlerMuxPeerClusters
* fix(server): remove unused variables and functions
  - remove unused apiPass and apiUser variables from api.go
  - remove unused proxyToURL method from api.go
  - remove unused filewalked variable from api.go
  - remove unused handlerMuxClustersOld from api.go
  - remove unused handlerAgents and handlerLog from http.go
  - remove unused handlerMuxServicesBootstrap, handlerMuxSwitchReadOnly, handlerMuxClusterSSTStop from api_cluster.go
  - remove unused errorConnectVault field from server.go
  - remove unused overwriteConf variable from server_cmd.go
* fix(server): remove unused imports and variables
  - remove unused regtest import from api.go
  - remove unused strconv import from http.go
  - remove unused clientAddress variable from api_database.go
* fix(server): fix handlerMuxSlaveIndex issues
  - check json.Marshal error instead of ignoring it
  - use proper HTTP status codes (400, 403, 404, 500)
  - add Content-Type header
  - fix copylock warning by passing pointer to encoder
  - fix comment typo
  - improve error messages
  - remove redundant else blocks and return statements
* refactor(server): remove redundant return statements in api_cluster.go
* refactor(server): remove 3 more redundant return statements
* refactor(server): remove all remaining redundant return statements in api_cluster.go
* fix(server): fix critical bug in handlerMuxClusterSchemaMoveTable
  - fix wrong variable check: was checking mycluster instead of destcluster
  - add validation for required parameters (clusterShard, schemaName, tableName)
  - use proper HTTP status codes (400, 403, 404 instead of all 500)
  - fix typo in error message (Unrichable -> Unreachable)
  - add clear error message when no shard proxy found
  - improve code flow and readability with early returns
* refactor(server): fix ST1013 and ST1005 linting issues in api_cluster.go
  - ST1013 fixes (use http status constants): replace 501 with http.StatusNotImplemented (4 occurrences)
  - ST1005 fixes (error strings should not be capitalized): "Setting not found" -> "setting not found" (4 occurrences), "Unable to decode" -> "unable to decode" (2 occurrences)
* refactor(lint): simplify boolean checks and control flow
* feat(ci): add linter-fixer to the flow
* fix(cluster): improve monitoring guards and config handling
* refactor(cluster): remove unused code
* refactor(cluster): resolve staticcheck quickfixes
* fix: resolve all ST1013, S1002, QF1011, and context leak issues
  - Replace 307 numeric HTTP status codes with http.Status* constants (ST1013)
    - server/api*.go: 206 instances
    - server/api_database.go: 95 additional instances
    - server/api_proxy.go: 6 additional instances
  - Convert 40+ bool == false comparisons to negation operator (S1002)
    - server/api_database.go: Fixed all 40 comparisons
    - server/api_proxy.go: Fixed 2 comparisons
  - Remove 5 redundant type declarations from var assignments (QF1011)
    - server/api.go: 4 instances
    - server/api_database.go: 1 instance
    - server/api_cluster.go: 1 instance
    - server/server_cloud.go: 1 instance
  - Fix context.WithTimeout() leak by capturing cancel function
    - server/api.go:1844
  - Convert proto Tag getters to return pointers to avoid lock copying
    - cluster/configurator: Return []*v3.Tag instead of []v3.Tag
* refactor(cluster): redundant returns
* chore(agents): update linter-fixer
1 parent d54e49b commit 062b5e8

76 files changed: +1538 -1877 lines changed
.claude/agents/linter-fixer.md

Lines changed: 73 additions & 0 deletions
@@ -0,0 +1,73 @@
+---
+name: linter-fixer
+description: "Use this agent when you need to run lint checks on Go code and fix linting issues. This agent will execute linting tools in preferential order (golangci-lint-v2, golangci-lint, golint, or staticcheck) on packages, plan fixes, and request user approval before implementing them. Trigger this agent after writing or modifying Go code that needs quality assurance.\\n\\nExamples:\\n- <example>\\nContext: User has written new code in the cluster package and wants to ensure it passes linting.\\nuser: \"I've added some new monitoring functions to the cluster package. Can you check for linting issues?\"\\nassistant: \"I'll use the linter-fixer agent to run lint checks on the cluster package and identify any issues.\"\\n<function call to launch linter-fixer agent>\\ncommentary: The user has completed code changes and wants linting verification. Use the linter-fixer agent to run lint checks on the modified package.\\n</example>\\n- <example>\\nContext: User modified a single file but wants linting results specific to that file.\\nuser: \"I updated server/http.go with new API endpoints. Can you lint just that file?\"\\nassistant: \"I'll use the linter-fixer agent to run linting on the server package and isolate the results for http.go.\"\\n<function call to launch linter-fixer agent>\\ncommentary: The user is requesting linting for a specific file. Use the linter-fixer agent to run linting on the package and filter results for that file only.\\n</example>"
+tools: Bash, Grep, Edit
+model: sonnet
+color: green
+---
+
+You are an expert Go linting and code quality specialist. Your role is to identify and fix linting issues in Go code while ensuring the developer maintains control over all changes.
+
+## Core Responsibilities
+
+1. **Tool Discovery and Selection**
+   - Search for linting tools in this order of preference: golangci-lint-v2, golangci-lint, golint, staticcheck
+   - Check standard locations: $PATH, go binaries directory (`$GOPATH/bin`, `$GOROOT/bin`), and common installation paths
+   - If tools are not found, offer to install them before proceeding
+   - Report which tool you're using and its version
+
+2. **Package-Level Linting**
+   - Always run linters on complete packages, not individual files
+   - If a user requests linting for a single file, run the linter on the containing package and isolate/highlight results for that specific file
+   - Identify the correct package path based on the file location and Go module structure
+
+3. **Issue Analysis and Planning**
+   - Run the selected linting tool and capture all output
+   - Categorize issues by severity and type (e.g., unused variables, naming conventions, complexity, etc.)
+   - Create a clear, organized summary of all issues found
+   - Propose specific fixes for each issue, explaining the rationale
+   - Group related fixes together logically
+
+4. **User Review and Approval**
+   - Present all proposed fixes in a clear, reviewable format
+   - Show before/after code examples for each fix
+   - Request explicit user approval before implementing any changes
+   - Allow the user to approve all fixes, approve selectively, or request modifications to proposed fixes
+   - Do not proceed with implementations without confirmed approval
+
+5. **Implementation and Verification**
+   - After approval, implement the fixes precisely as reviewed
+   - Re-run the linter to verify that fixes resolved the issues
+   - Report the results and confirm all targeted issues are resolved
+   - Highlight any new issues that may have emerged during fixes
+
+## Specific Behaviors
+
+- **Single File Requests**: When a user specifies a single file, run linting on the package but clearly mark which issues belong to the requested file. Example: "Issues in server/http.go (5 issues) | Issues elsewhere in package (2 issues)"
+- **Tool Not Found**: If no linting tools are available, explain what each tool does and offer installation: "Would you like me to install golangci-lint? It's the most comprehensive option and includes golint, staticcheck, and many other linters."
+- **Large Issue Sets**: If linting finds many issues, organize by category and suggest tackling them in priority order (usually: errors > unused code > style issues)
+- **Project Context**: For the replication-manager project, be aware that code spans multiple packages (server/, cluster/, clients/, utils/, router/, etc.) and uses build tags. Apply linting appropriately for the package context.
+
+## golangci-linter-v2 usage
+
+- Always run the tool with the following options: `golangci-lint-v2 run --output.tab.path stdout --max-same-issues 0 --max-issues-per-linter 0`
+- Run specific linters with the --enable-only option, example: `--enable-only staticcheck`
+- In case the user has a specific list of errors that they want to look at, instead of using grep, make a temporary edit to the .golangci.yml file and revert this edit when the task is complete.
+
+## Output Format
+
+1. **Tool Discovery Report**: State which tool was found/selected and version
+2. **Issue Summary**: Count and categorize all issues
+3. **Detailed Findings**: For each issue, show:
+   - File and line number
+   - Issue description
+   - Proposed fix with code example
+4. **Review Request**: Present all fixes and ask for approval
+5. **Post-Implementation Report**: Confirm changes made and verify resolution
+
+## Decision Framework
+
+- Prioritize automated safety over manual changes - always get user approval
+- Be conservative with formatting changes while aggressive about functional issues
+- When multiple fixes are possible, choose the most idiomatic Go solution
+- Do not modify code outside the scope of linting fixes without explicit permission

.golangci.yml

Lines changed: 14 additions & 0 deletions
@@ -0,0 +1,14 @@
+version: "2"
+
+issues:
+  max-issues-per-linter: 0
+  max-same-issues: 0
+
+linters:
+  disable:
+    - errcheck
+  exclusions:
+    rules:
+      - linters:
+          - staticcheck
+        text: "ST1005:"

clients/client_configurator.go

Lines changed: 2 additions & 2 deletions
@@ -37,7 +37,7 @@ var dbResourceCategoryIndex int = 0
 var dbUsedTags []string
 var dbCategoryIndex int
 var dbTagIndex int
-var dbCurrentCategoryTags []v3.Tag
+var dbCurrentCategoryTags []*v3.Tag
 var dbUsedTagIndex int
 var PanIndex int
 var dbHost string
@@ -493,7 +493,7 @@ func cliDisplayConfigurator(configurator *configurator.Configurator) {

 curWitdh = 1

-dbCurrentCategoryTags = make([]v3.Tag, 0, len(tags))
+dbCurrentCategoryTags = make([]*v3.Tag, 0, len(tags))
 dbUsedTags = configurator.GetDBTags()

 for _, tag := range tags {
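
The switch from []v3.Tag to []*v3.Tag above pairs with the commit-message note that configurator Tag getters now return pointers to avoid lock copying. The sketch below illustrates the idea with a stand-in type; the real v3.Tag is protobuf-generated, and the Tag struct, mutex field, and function names here are assumptions for illustration only.

```go
package main

import (
	"fmt"
	"sync"
)

// Stand-in for a generated message: real protobuf structs carry internal
// state (modeled here as a sync.Mutex) that must not be copied by value,
// which is what the copylocks analysis flags.
type Tag struct {
	mu   sync.Mutex
	Name string
}

// Returning a slice of pointers shares the underlying values instead of
// copying each struct (and its embedded lock) into the new slice.
func currentCategoryTags(all []*Tag) []*Tag {
	out := make([]*Tag, 0, len(all))
	out = append(out, all...)
	return out
}

func main() {
	tags := []*Tag{{Name: "innodb"}, {Name: "docker"}}
	for _, t := range currentCategoryTags(tags) {
		fmt.Println(t.Name)
	}
}
```
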

cluster/app.go

Lines changed: 6 additions & 5 deletions
@@ -24,7 +24,7 @@ import (
 type App struct {
 Id string `json:"id" groups:"apps"`
 Name string `json:"name" groups:"apps"`
-Type string `json:"type" groups:"apps""`
+Type string `json:"type" groups:"apps"`
 Host string `json:"host" groups:"apps"`
 HostIPV6 string `json:"hostIPV6"`
 Port string `json:"port" groups:"apps"`
@@ -122,19 +122,20 @@ func (app *App) Refresh() error {
 app.AppClusterSubstitute = sub
 }

-if appState == stateMaintenance {
+switch appState {
+case stateMaintenance:
 app.SetState(stateMaintenance)
-} else if appState == stateAppRunning {
+case stateAppRunning:
 app.SetState(stateAppRunning)
 app.FailCount = 0
-} else if appState == stateFailed {
+case stateFailed:
 if app.FailCount >= cluster.Conf.MaxFail {
 app.SetState(stateFailed)
 } else {
 app.SetState(stateSuspect)
 app.FailCount++
 }
-} else if appState == stateAppWarning {
+case stateAppWarning:
 app.SetState(stateAppWarning)
 }

cluster/app_chk.go

Lines changed: 5 additions & 4 deletions
@@ -17,15 +17,16 @@ import (

 func (app *App) GetMonitoringStatus() string {
 routes := app.GetAppConfig().Deployment.Routes
-var primaryStatus string = stateAppRunning
+var primaryStatus = stateAppRunning
 if len(routes) == 0 {
 return stateFailed
 }

 routeStatuses := make([]config.RouteStatus, 0, len(routes))
 for _, route := range routes {
 routeStatus := config.RouteStatus{Route: route, Status: stateAppRunning}
-if route.Protocol == "https" {
+switch route.Protocol {
+case "https":
 httpStatus, _, err := app.GetAppHTTPStatus(route, false)
 if err != nil {
 routeStatus.Status = stateAppWarning
@@ -52,7 +53,7 @@ func (app *App) GetMonitoringStatus() string {
 }
 }
 }
-} else if route.Protocol == "tcp" {
+case "tcp":
 // For TCP routes, we assume the app is running if it can connect
 err := app.GetAppTCPStatus(route)
 if err != nil {
@@ -69,7 +70,7 @@ func (app *App) GetMonitoringStatus() string {
 }
 }
 }
-} else {
+default:
 app.ClusterGroup.SetState("APPERR004", state.State{ErrType: "WARN", ErrKey: "APPERR004", ErrDesc: fmt.Sprintf(config.ClusterError["APPERR004"], app.GetId(), route.Protocol), ServerUrl: app.Host})
 routeStatus.Status = stateFailed

cluster/app_set.go

Lines changed: 1 addition & 3 deletions
@@ -171,7 +171,6 @@ func (app *App) SwitchSetting(key string) error {
 return errors.New("unknown setting: " + key)
 }

-return nil
 }

 func (app *App) SetMaintenance(maintenance bool) {
@@ -218,7 +217,6 @@ func (app *App) SetAppProvisionByCredit(creditPlanSize int) error {
 return nil
 }

-provCredit := creditPlanSize
 num_agents := len(app.GetAppAgents())

 if num_agents == 0 {
@@ -229,7 +227,7 @@ func (app *App) SetAppProvisionByCredit(creditPlanSize int) error {
 }

 // For flex provisioning, we divide the credit planned by the number of agents
-provCredit = creditPlanSize / num_agents
+provCredit := creditPlanSize / num_agents

 baseCore, err := config.ParseUnitMeasurementToInt("0", app.ClusterGroup.Conf.ProvAppCpuCores, true)
 if err != nil {

cluster/cluster.go

Lines changed: 12 additions & 54 deletions
@@ -334,13 +334,6 @@ type VariableDiff struct {
 DiffValues []Diff `json:"diffValues"`
 }

-const (
-stateClusterStart string = "Running starting"
-stateClusterDown string = "Running cluster down"
-stateClusterErr string = "Running with errors"
-stateClusterWarn string = "Running with warnings"
-stateClusterRun string = "Running"
-)
 const (
 ConstJobCreateFile string = "JOB_O_CREATE_FILE"
 ConstJobAppendFile string = "JOB_O_APPEND_FILE"
@@ -689,7 +682,7 @@ func (cluster *Cluster) Run() {
 cluster.Topology = config.TopoUnknown
 cluster.Unlock()

-for cluster.exit == false {
+for !cluster.exit {
 if !cluster.Conf.MonitorPause {
 cluster.ServerIdList = cluster.GetDBServerIdList()
 cluster.ProxyIdList = cluster.GetProxyServerIdList()
@@ -893,23 +886,7 @@ func (cluster *Cluster) StateProcessing() {
 cluster.LogModulePrintf(cluster.Conf.Verbose, config.ConstLogModGeneral, config.LvlErr, "Fail of processing reseed for %s: %s", servertoreseed.URL, err)
 }
 }
-if s.ErrKey == "WARN0075" {
-/*
-This action is inactive due to direct function from Job
-*/
-// //Only mysqldump exists in the script
-// task := "reseed" + cluster.Conf.BackupLogicalType
-// cluster.LogModulePrintf(cluster.Conf.Verbose, config.ConstLogModGeneral, config.LvlInfo, "Sending master logical backup to reseed %s", s.ServerUrl)
-// if master != nil {
-// if mybcksrv != nil {
-// go cluster.SSTRunSender(mybcksrv.GetMyBackupDirectory()+"mysqldump.sql.gz", servertoreseed, task)
-// } else {
-// go cluster.SSTRunSender(master.GetMasterBackupDirectory()+"mysqldump.sql.gz", servertoreseed, task)
-// }
-// } else {
-// cluster.LogModulePrintf(cluster.Conf.Verbose, config.ConstLogModGeneral, config.LvlErr, "No master cancel backup reseeding %s", s.ServerUrl)
-// }
-}
+
 if s.ErrKey == "WARN0076" && servertoreseed != nil {
 task := "flashback" + cluster.Conf.BackupPhysicalType
 err := servertoreseed.ProcessFlashbackPhysical(task)
@@ -921,31 +898,7 @@ func (cluster *Cluster) StateProcessing() {
 cluster.LogModulePrintf(cluster.Conf.Verbose, config.ConstLogModGeneral, config.LvlErr, "Fail of processing flashback for %s: %s", servertoreseed.URL, err)
 }
 }
-if s.ErrKey == "WARN0077" {
-/*
-This action is inactive due to direct function from rejoin
-*/
-// //Only mysqldump exists in the script
-// task := "flashback" + cluster.Conf.BackupLogicalType
-// cluster.LogModulePrintf(cluster.Conf.Verbose, config.ConstLogModGeneral, config.LvlInfo, "Sending logical backup to flashback reseed %s", s.ServerUrl)
-// if mybcksrv != nil {
-// go cluster.SSTRunSender(mybcksrv.GetMyBackupDirectory()+"mysqldump.sql.gz", servertoreseed, task)
-// } else {
-// go cluster.SSTRunSender(servertoreseed.GetMyBackupDirectory()+"mysqldump.sql.gz", servertoreseed, task)
-// }
-}
-/*
-// Unused, will be split to logical and physical backup. For rejoin will still use the same ReseedMasterSST
-if s.ErrKey == "WARN0101" {
-cluster.LogModulePrintf(cluster.Conf.Verbose, config.ConstLogModGeneral, config.LvlInfo, "Cluster have backup")
-for _, srv := range cluster.Servers {
-if srv.HasWaitBackupCookie() {
-cluster.LogModulePrintf(cluster.Conf.Verbose, config.ConstLogModGeneral, config.LvlInfo, "Server %s was waiting for backup", srv.URL)
-go srv.ReseedMasterSST()
-}
-}
-}
-*/
+
 if s.ErrKey == "WARN0111" {
 cluster.LogModulePrintf(cluster.Conf.Verbose, config.ConstLogModGeneral, config.LvlInfo, "Cluster have logical backup")
 for _, srv := range cluster.Servers {
@@ -1245,7 +1198,7 @@ func (cluster *Cluster) SaveImmutableConfig() (bool, error) {

 // Get Sorted Keys
 keys := make([]string, 0)
-for key, _ := range cluster.Conf.ImmuableFlagMap {
+for key := range cluster.Conf.ImmuableFlagMap {
 keys = append(keys, key)
 }

@@ -1308,7 +1261,7 @@ func (cluster *Cluster) SaveCacheConfig() error {
 defer file.Close()

 keys := make([]string, 0)
-for key, _ := range cluster.Conf.ImmuableFlagMap {
+for key := range cluster.Conf.ImmuableFlagMap {
 keys = append(keys, key)
 }

@@ -1697,7 +1650,7 @@ func (cluster *Cluster) MonitorVariablesDiff() {
 myvalues = append(myvalues, mastervalue)
 for _, s := range cluster.slaves {
 slaveVariables := s.Variables.ToNewMap()
-if slaveVariables[k] != v && exceptVariables[k] != true {
+if slaveVariables[k] != v && !exceptVariables[k] {
 var slavevalue Diff
 slavevalue.Server = s.URL
 slavevalue.VariableValue = slaveVariables[k]
@@ -1799,7 +1752,11 @@ func (cluster *Cluster) MonitorMasterTableSchema() error {
 if haschanged {
 for _, pri := range cluster.Proxies {
 if prx, ok := pri.(*MariadbShardProxy); ok {
-if !(t.TableSchema == "replication_manager_schema" || strings.Contains(t.TableName, "_copy") == true || strings.Contains(t.TableName, "_back") == true || strings.Contains(t.TableName, "_old") == true || strings.Contains(t.TableName, "_reshard") == true) {
+if t.TableSchema != "replication_manager_schema" &&
+!strings.Contains(t.TableName, "_copy") &&
+!strings.Contains(t.TableName, "_back") &&
+!strings.Contains(t.TableName, "_old") &&
+!strings.Contains(t.TableName, "_reshard") {
 cluster.LogModulePrintf(cluster.Conf.Verbose, config.ConstLogModGeneral, config.LvlDbg, "blabla table %s %s %s", duplicates, t.TableSchema, t.TableName)
 cluster.ShardProxyCreateVTable(prx, t.TableSchema, t.TableName, duplicates, false)
 }
@@ -1979,6 +1936,7 @@ func (cluster *Cluster) MonitorQueryRules() {
 proxyIds = append(proxyIds, prx.Id)
 myRule.Proxies = strings.Join(proxyIds, ",")
 }
+myRule.Proxies = strings.Join(duplicates, ",")
 } else {
 myRule.Id = rule.Id
 myRule.UserName = rule.UserName

cluster/cluster_app.go

Lines changed: 2 additions & 2 deletions
@@ -348,7 +348,7 @@ func (cluster *Cluster) AddSeededApp(srv, port, dockerImg, template string) erro

 if template != "" {
 resolvedContent, _ := cluster.ParseTemplateContent(app, content)
-newViper, err = cluster.LoadTemplateToViper(resolvedContent)
+newViper, _ = cluster.LoadTemplateToViper(resolvedContent)
 newViper.Set("app-host", srv)
 newViper.Set("app-port", port)
 newViper.Set("prov-app-docker-img", dockerImg)
@@ -711,7 +711,7 @@ func (cluster *Cluster) ParseTemplateContent(app *App, content []byte) ([]byte,
 }

 // If the app cluster substitute is still empty, use the template as is
-var parsed string = string(content)
+var parsed = string(content)
 if app.AppClusterSubstitute != "" {
 parsed, err = cluster.ParseAppTemplate(string(content), app.AppClusterSubstitute)
 if err != nil {

cluster/cluster_bck.go

Lines changed: 2 additions & 2 deletions
@@ -487,7 +487,7 @@ func (cluster *Cluster) CheckLogicalBackupToolVersion(server *ServerMonitor) err
 backupv, _ := version.NewVersionFromString(logical.BackupTool, logical.BackupToolVersion)
 if v.ToInt(2) != backupv.ToInt(2) { // Major and minor version must match
 cluster.SetState("WARN0156", state.State{ErrType: "WARNING", ErrDesc: fmt.Sprintf(clusterError["WARN0156"], v.ToString(), logical.BackupToolVersion), ErrFrom: "CHECK", ServerUrl: server.URL})
-return fmt.Errorf("Node %s backup tool version is not compatible with restore version.", server.URL)
+return fmt.Errorf("Node %s backup tool version is not compatible with restore version", server.URL)
 } else if cluster.IsInErrorState("WARN0156", server.URL) {
 // Remove state if version is now correct
 cluster.GetStateMachine().DeleteState(fmt.Sprintf("WARN0156@%s", server.URL))
@@ -509,7 +509,7 @@ func (cluster *Cluster) CheckPhysicalBackupToolVersion(server *ServerMonitor) er
 backupv, _ := version.NewVersionFromString(physical.BackupTool, physical.BackupToolVersion)
 if v.ToInt(2) != backupv.ToInt(2) { // Major and minor version must match
 cluster.SetState("WARN0157", state.State{ErrType: "WARNING", ErrDesc: fmt.Sprintf(clusterError["WARN0157"], v.ToString(), physical.BackupToolVersion), ErrFrom: "CHECK", ServerUrl: server.URL})
-return fmt.Errorf("Node %s backup tool version is not same with restore version.", server.URL)
+return fmt.Errorf("Node %s backup tool version is not same with restore version", server.URL)
 } else if cluster.IsInErrorState("WARN0157", server.URL) {
 // Remove state if version is now correct
 cluster.GetStateMachine().DeleteState(fmt.Sprintf("WARN0157@%s", server.URL))
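
The two fmt.Errorf changes above only drop the trailing period; the capitalized "Node" remains, which lines up with the new .golangci.yml excluding ST1005 for staticcheck. For comparison, a fully ST1005-style message would look like this hypothetical sketch (the function and arguments are made up for illustration, not the wording used in cluster_bck.go):

```go
package main

import "fmt"

// Hypothetical ST1005-compliant variant: error strings start lowercase and
// carry no trailing punctuation, so they read cleanly when wrapped or prefixed.
func checkBackupToolVersion(url string, compatible bool) error {
	if !compatible {
		return fmt.Errorf("node %s backup tool version is not compatible with restore version", url)
	}
	return nil
}

func main() {
	if err := checkBackupToolVersion("db1:3306", false); err != nil {
		fmt.Println("check failed:", err)
	}
}
```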
