
I am trying to run the Google Cloud Python SDK from inside a Kubernetes pod running on Google Compute Engine. There is a service account attached to the VM, which gives it access to Secret Manager. I am able to access Secret Manager from the host, however running the Python SDK from the Kubernetes pod complains about not being able to access the metadata service:

>>> secret_id = 'unskript_test'
>>> name = client.secret_path(project_id, secret_id)
>>> response = client.get_secret(request={"name": name})
Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 67, in error_remapped_callable
    return callable_(*args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/grpc/_channel.py", line 946, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/opt/conda/lib/python3.7/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
    raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
    status = StatusCode.UNAVAILABLE
    details = "Getting metadata from plugin failed with error: Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Enginemetadata service. Compute Engine Metadata server unavailable"
    debug_error_string = "{"created":"@1630634901.103779641","description":"Getting metadata from plugin failed with error: Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Enginemetadata service. Compute Engine Metadata server unavailable","file":"src/core/lib/security/credentials/plugin/plugin_credentials.cc","file_line":90,"grpc_status":14}"
>
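
For reference, the client in the snippet above was created along these lines. This is only a sketch of the assumed setup; the project ID value is a placeholder:

# Sketch of the setup assumed by the snippet above; the project ID is a placeholder.
from google.cloud import secretmanager

project_id = "my-gcp-project"  # placeholder
client = secretmanager.SecretManagerServiceClient()

secret_id = "unskript_test"
name = client.secret_path(project_id, secret_id)
response = client.get_secret(request={"name": name})
print(response.name)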

metadata.google.internal doesn't get resolved from the Kubernetes pod:

jovyan@jovyan-25ca6c8c-157d-49e5-9366-f9d57fcb7a9f:~$ wget http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true
--2021-09-03 02:11:19--  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true
Resolving metadata.google.internal (metadata.google.internal)... failed: Name or service not known.
wget: unable to resolve host address ‘metadata.google.internal’

However, the host is able to resolve it:

ubuntu@gcp-test-proxy:~$ wget http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true
--2021-09-03 02:11:27--  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true
Resolving metadata.google.internal (metadata.google.internal)... 169.254.169.254
Connecting to metadata.google.internal (metadata.google.internal)|169.254.169.254|:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2021-09-03 02:11:27 ERROR 403: Forbidden.

How can I make the pod resolve metadata.google.internal?

  • Hello @Amit. Was your problem resolved? If yes, can you mention the steps you have taken to solve the problem, or if the given answer helped, can you accept or upvote it? – Bakul Mitra Mar 23 '22 at 04:55

1 Answer

This is caused by the Kubernetes pod not being able to resolve the metadata.google.internal DNS name to the proper IP. Your host machine probably has an entry in /etc/hosts that hardcodes that domain to the IP 169.254.169.254.

You should be able to replicate that in your Pod by modifying its /etc/hosts file.

Just remember that this will only work on VMs running on GCP. Outside of GCP, the IP address 169.254.169.254 is just another IP address with no special meaning.

EDIT: Just checked /etc/hosts on one of my GCP VMs, and here's what I found:

$ cat /etc/hosts
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
169.254.169.254 metadata.google.internal metadata

So just try copying that last line to your pod's /etc/hosts.
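
If editing /etc/hosts inside a running container is awkward, one declarative option is the Pod spec's hostAliases field, which writes entries into the container's /etc/hosts for you. A minimal sketch, assuming the pod name and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: metadata-hosts-demo      # placeholder name
spec:
  hostAliases:
    - ip: "169.254.169.254"
      hostnames:
        - "metadata.google.internal"
        - "metadata"
  containers:
    - name: app
      image: python:3.7-slim     # placeholder image
      command: ["sleep", "infinity"]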

Maciek
  • Thanks @maciek. The issue was with microk8s not copying the host's /etc/hosts entries to the pods. Once we moved to k3s, it got resolved. – Amit Mar 24 '22 at 14:22