[ACR] az acr login: Support Podman for --name parameter #33115
Conversation
This is a follow-up PR based on Azure#31850. All credit for the initial work goes to the original author.
Validation for Azure CLI Full Test Starting...
Thanks for your contribution!

Validation for Breaking Change Starting...
Thanks for your contribution!

Thank you for your contribution! We will review the pull request and get back to you soon.

The git hooks are available for the azure-cli and azure-cli-extensions repos. They could help you run required checks before creating the PR. Please sync the latest code with the latest dev branch (for azure-cli) or main branch (for azure-cli-extensions).

```shell
pip install azdev --upgrade
azdev setup -c <your azure-cli repo path> -r <your azure-cli-extensions repo path>
```
Pull request overview
Adds Podman support for az acr login (and related diagnostics) by auto-selecting an available container CLI when Docker isn’t installed.
Changes:
- Auto-detect `podman` in `PATH` and use it as the default container command when `docker` isn't found.
- Update ACR "check health" diagnostics to be container-command-agnostic and attempt JSON parsing for version output.
- Adjust WSL `*.exe` retry logic to use the selected command name.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| src/azure-cli/azure/cli/command_modules/acr/custom.py | Detects Podman when Docker isn’t present; updates *.exe fallback behavior for the chosen command. |
| src/azure-cli/azure/cli/command_modules/acr/check_health.py | Makes diagnostics messages generic and changes version probing to parse JSON output. |
```python
docker_command = f'{docker_command}.exe'
p = Popen([docker_command, "ps"], stdout=PIPE, stderr=PIPE)
```
Appending `.exe` unconditionally to `docker_command` can generate invalid commands (e.g., `podman.exe` in non-Windows contexts, or `docker.exe.exe` when `DOCKER_COMMAND` already includes an `.exe` or a path). Consider only applying the `*.exe` retry when the command is the default `docker` (or when the basename lacks an extension), and avoid double-appending by checking `docker_command.lower().endswith('.exe')` / using `os.path.splitext` on the basename.
```diff
-output, warning, stderr, succeeded = _subprocess_communicate(
-    [docker_command, "version", "--format", "'Docker version {{.Server.Version}}, "
-     "build {{.Server.GitCommit}}, platform {{.Server.Os}}/{{.Server.Arch}}'"])
+output, warning, stderr, succeeded = _subprocess_communicate([docker_command, "version", "--format", "json"])
```
For Docker, `docker version --format` expects a Go template (e.g., `{{json .}}`), not the literal string `json`. As written, Docker will not output JSON, and the subsequent `json.loads(output)` will fall back to unknown fields, degrading diagnostics. A concrete fix is to use a Docker-compatible `--format` template for Docker (and keep `--format json` for Podman if needed), or switch to a Docker/Podman invocation that reliably outputs JSON across both CLIs.
Suggested change:

```diff
-output, warning, stderr, succeeded = _subprocess_communicate([docker_command, "version", "--format", "json"])
+format_arg = "{{json .}}" if docker_command == "docker" else "json"
+output, warning, stderr, succeeded = _subprocess_communicate(
+    [docker_command, "version", "--format", format_arg]
+)
```
```python
    json_output = json.loads(output).get("Client")
except json.decoder.JSONDecodeError:
```
This changes the diagnostic from reporting the daemon/server version (previously `{{.Server.Version}}`) to reporting the client version (`Client.Version`). That's a behavior regression for a health check that's validating the daemon. If the intent is to keep reporting the daemon version, prefer parsing `Server` when present/when the daemon is available (and fall back to `Client` only when server info is absent).
Suggested change:

```diff
-    json_output = json.loads(output).get("Client")
-except json.decoder.JSONDecodeError:
+    docker_info = json.loads(output)
+except json.decoder.JSONDecodeError:
+    docker_info = {}
+if isinstance(docker_info, dict):
+    json_output = docker_info.get("Server") or docker_info.get("Client") or {}
+else:
```
```python
os = json_output.get("Os", "unknown")
arch = json_output.get("Arch", "unknown")
else:
    try:
        os, arch = json_output.get("OsArch", "unknown").split("/")
    except ValueError:
        os = "unknown"
        arch = "unknown"
logger.warning("%s version: %s, build %s, platform %s/%s", docker_command.title(), version, commit, os, arch)
```
Using `os` as a local variable name is ambiguous in Python because `os` commonly refers to the standard library module. Renaming it to something like `platform_os` (and possibly `platform_arch`) would make the intent clearer and avoid confusion in future edits.
Suggested change:

```diff
-os = json_output.get("Os", "unknown")
-arch = json_output.get("Arch", "unknown")
-else:
-    try:
-        os, arch = json_output.get("OsArch", "unknown").split("/")
-    except ValueError:
-        os = "unknown"
-        arch = "unknown"
-logger.warning("%s version: %s, build %s, platform %s/%s", docker_command.title(), version, commit, os, arch)
+platform_os = json_output.get("Os", "unknown")
+platform_arch = json_output.get("Arch", "unknown")
+else:
+    try:
+        platform_os, platform_arch = json_output.get("OsArch", "unknown").split("/")
+    except ValueError:
+        platform_os = "unknown"
+        platform_arch = "unknown"
+logger.warning("%s version: %s, build %s, platform %s/%s", docker_command.title(), version, commit, platform_os, platform_arch)
```
```diff
 if docker_daemon_available:
-    logger.warning("Docker daemon status: available")
+    logger.warning("%s daemon status: available", docker_command.title())
```
Using `docker_command.title()` can produce odd display names (e.g., `docker.exe` -> `Docker.Exe`, or titles for full paths if `DOCKER_COMMAND` is a path). Consider deriving a stable display label from the executable basename (and stripping `.exe`), or explicitly mapping known values (`docker`/`podman`) to user-facing names.
```python
except ValueError:
    os = "unknown"
    arch = "unknown"
logger.warning("%s version: %s, build %s, platform %s/%s", docker_command.title(), version, commit, os, arch)
```
Using `docker_command.title()` can produce odd display names (e.g., `docker.exe` -> `Docker.Exe`, or titles for full paths if `DOCKER_COMMAND` is a path). Consider deriving a stable display label from the executable basename (and stripping `.exe`), or explicitly mapping known values (`docker`/`podman`) to user-facing names.
This is a follow-up PR based on #31850. All credit for the initial work goes to the original author, cpressland.
Related command

`az acr login`

Description

Adds a simple check for `podman` being in `PATH` and sets that as the `docker_command` if found. This enables users to run `az acr login` without Docker being installed or setting the `DOCKER_COMMAND` environment variable.

Since the `get_docker_command` function is also used by Azure CLI extensions, renaming it now would cause those extension commands to break and error out. We'll defer the name update for now.

Testing Guide

Execute `az acr login` + arguments to a valid ACR instance when you have Podman installed; instead of getting an error, it should pass.

History Notes
[ACR] az acr login: Support Podman for --name parameter
This checklist is used to make sure that common guidelines for a pull request are followed.
The PR title and description have followed the guideline in Submitting Pull Requests.
I adhere to the Command Guidelines.
I adhere to the Error Handling Guidelines.