Cloud LVM autoresize partition PV

#! /bin/bash
 
# Grow the partition backing the (first) LVM PV to fill the disk, then grow the PV.
# Assumes classic /dev/sdXN or /dev/vdXN naming (no NVMe/mmcblk 'p' suffix).
PV_PART=$(pvdisplay -c | cut -d: -f1 | awk '{print $1}' | head -1)
PV_DISK="${PV_PART//[0-9]/}"                  # remove the partition number to get the disk device
parted -s "$PV_DISK" print fix > /dev/null    # let parted repair the GPT backup header if needed
PARTNB_EXTENDED=$(parted -s "$PV_DISK" print | awk '/extended$/ {print $1}')
PARTNB_LVM=$(parted -s "$PV_DISK" print | awk '/lvm$/ {print $1}')
 
# The variable is always set (possibly empty), so test for a non-empty value, not ${VAR+x}
if [ -n "$PARTNB_EXTENDED" ]; then parted -s "$PV_DISK" resizepart "$PARTNB_EXTENDED" 100% ; fi
parted -s "$PV_DISK" resizepart "$PARTNB_LVM" 100%
pvresize "$PV_PART"
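
The script only grows the partition and the PV; the logical volume and its filesystem still need extending afterwards. A minimal example (the LV path is hypothetical):

lvextend -r -l +100%FREE /dev/mapper/vg0-root   # -r also resizes the filesystem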

To fix:

  • Handle multiple PVs: head -1 is not a real solution (see the sketch after this list)

Features to add:

  • UEFI?
  • Launch via cloud-init
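
A minimal sketch of the multi-PV variant, assuming the same /dev/sdXN-style naming as the script above:

#! /bin/bash
# Iterate over every PV instead of taking only the first one with 'head -1'.
for PV_PART in $(pvs --noheadings -o pv_name); do
    PV_DISK="${PV_PART//[0-9]/}"       # disk device backing this PV
    PARTNB="${PV_PART##*[a-z]}"        # trailing digits = partition number
    parted -s "$PV_DISK" print fix > /dev/null
    parted -s "$PV_DISK" resizepart "$PARTNB" 100%
    pvresize "$PV_PART"
done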
2025/03/24 15:06

Problem: Python - urllib3 - OSError: setuptools pkg_resources pip wheel failed with error code 2

user@srv1:~/openstackcli$ virtualenv .
Running virtualenv with interpreter /usr/bin/python2
New python executable in /home/user1/openstackcli/bin/python2
Not overwriting existing python script /home/user1/openstackcli/bin/python (you must use /home/user1/openstackcli/bin/python2)
Installing setuptools, pkg_resources, pip, wheel...
  Complete output from command /home/user1/openstackcli/bin/python2 - setuptools pkg_resources pip wheel:
  Collecting setuptools
Exception:
Traceback (most recent call last):
  File "/usr/share/python-wheels/pip-9.0.1-py2.py3-none-any.whl/pip/basecommand.py", line 215, in main
    status = self.run(options, args)
  File "/usr/share/python-wheels/pip-9.0.1-py2.py3-none-any.whl/pip/commands/install.py", line 353, in run
    wb.build(autobuilding=True)
  File "/usr/share/python-wheels/pip-9.0.1-py2.py3-none-any.whl/pip/wheel.py", line 749, in build
    self.requirement_set.prepare_files(self.finder)
  File "/usr/share/python-wheels/pip-9.0.1-py2.py3-none-any.whl/pip/req/req_set.py", line 380, in prepare_files
    ignore_dependencies=self.ignore_dependencies))
  File "/usr/share/python-wheels/pip-9.0.1-py2.py3-none-any.whl/pip/req/req_set.py", line 554, in _prepare_file
    require_hashes
  File "/usr/share/python-wheels/pip-9.0.1-py2.py3-none-any.whl/pip/req/req_install.py", line 278, in populate_link
    self.link = finder.find_requirement(self, upgrade)
  File "/usr/share/python-wheels/pip-9.0.1-py2.py3-none-any.whl/pip/index.py", line 465, in find_requirement
    all_candidates = self.find_all_candidates(req.name)
  File "/usr/share/python-wheels/pip-9.0.1-py2.py3-none-any.whl/pip/index.py", line 423, in find_all_candidates
    for page in self._get_pages(url_locations, project_name):
  File "/usr/share/python-wheels/pip-9.0.1-py2.py3-none-any.whl/pip/index.py", line 568, in _get_pages
    page = self._get_page(location)
  File "/usr/share/python-wheels/pip-9.0.1-py2.py3-none-any.whl/pip/index.py", line 683, in _get_page
    return HTMLPage.get_page(link, session=self.session)
  File "/usr/share/python-wheels/pip-9.0.1-py2.py3-none-any.whl/pip/index.py", line 792, in get_page
    "Cache-Control": "max-age=600",
  File "/home/user1/openstackcli/share/python-wheels/requests-2.12.4-py2.py3-none-any.whl/requests/sessions.py", line 501, in get
    return self.request('GET', url, **kwargs)
  File "/usr/share/python-wheels/pip-9.0.1-py2.py3-none-any.whl/pip/download.py", line 386, in request
    return super(PipSession, self).request(method, url, *args, **kwargs)
  File "/home/user1/openstackcli/share/python-wheels/requests-2.12.4-py2.py3-none-any.whl/requests/sessions.py", line 488, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/user1/openstackcli/share/python-wheels/requests-2.12.4-py2.py3-none-any.whl/requests/sessions.py", line 609, in send
    r = adapter.send(request, **kwargs)
  File "/home/user1/openstackcli/share/python-wheels/CacheControl-0.11.7-py2.py3-none-any.whl/cachecontrol/adapter.py", line 47, in send
    resp = super(CacheControlAdapter, self).send(request, **kw)
  File "/home/user1/openstackcli/share/python-wheels/requests-2.12.4-py2.py3-none-any.whl/requests/adapters.py", line 423, in send
    timeout=timeout
  File "/home/user1/openstackcli/share/python-wheels/urllib3-1.19.1-py2.py3-none-any.whl/urllib3/connectionpool.py", line 643, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/home/user1/openstackcli/share/python-wheels/urllib3-1.19.1-py2.py3-none-any.whl/urllib3/util/retry.py", line 315, in increment
    total -= 1
TypeError: unsupported operand type(s) for -=: 'Retry' and 'int'
----------------------------------------
...Installing setuptools, pkg_resources, pip, wheel...done.
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/virtualenv.py", line 2375, in <module>
    main()
  File "/usr/lib/python3/dist-packages/virtualenv.py", line 724, in main
    symlink=options.symlink)
  File "/usr/lib/python3/dist-packages/virtualenv.py", line 992, in create_environment
    download=download,
  File "/usr/lib/python3/dist-packages/virtualenv.py", line 922, in install_wheel
    call_subprocess(cmd, show_stdout=False, extra_env=env, stdin=SCRIPT)
  File "/usr/lib/python3/dist-packages/virtualenv.py", line 817, in call_subprocess
    % (cmd_desc, proc.returncode))
OSError: Command /home/user1/openstackcli/bin/python2 - setuptools pkg_resources pip wheel failed with error code 2

Solution

The TypeError is only a symptom: it surfaces while pip retries a failed connection to PyPI. Check the network and the proxy:

export http_proxy=http://192.168.22.20:3128
export https_proxy=http://192.168.22.20:3128
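
A quick sanity check that PyPI is reachable through the proxy (the proxy address is the example above):

curl -sI -x http://192.168.22.20:3128 https://pypi.org/simple/ | head -1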
2025/03/24 15:06

Problem: Podman container still Stopping

After executing podman system migrate, the container status remained Stopping.
See Problem: podman - podman system migrate.

$ podman ps -a
CONTAINER ID  IMAGE                                     COMMAND               CREATED       STATUS    PORTS  NAMES
c5c069775351  aap.acme.local/ExecEnv1:0.9.6             ssh-agent sh -c t...  20 hours ago  Stopping         ansible_runner_241060
b242fe99cb8f  aap.acme.local/ee-supported-rhel8:latest  ansible-playbook ...  11 hours ago  Stopping         ansible_runner_241722

$ podman stop c5c069775351
ERRO[0000] Unable to clean up network for container c5c0697753515cb6ed3a2fdf76d9bcd5248160199dc9d26bfb7953f1de5e9e07: "unmounting network namespace for container c5c0697753515cb6ed3a2fdf76d9bcd5248160199dc9d26bfb7953f1de5e9e07: failed to unmount NS: at /tmp/podman-run-1000/netns/netns-ef973d33-1d85-2e00-b4fa-dc2ef6b6f811: invalid argument"
c5c069775351

$ podman stop c5c069775351
c5c069775351

$ podman stop c5c069775351
c5c069775351

$ podman rm c5c069775351
Error: cannot remove container c5c069775351fca6e99d9b43354e615671cdf0d150def3259e9bc97399db3e96 as it is stopping - running or paused containers cannot be removed without force: container state improper

$ podman rm -f c5c069775351
ERRO[0000] Free container lock: no such file or directory
c5c069775351

Or, for all containers at once:

podman container kill -a
podman container rm -fa
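
If only the stuck containers should be removed rather than all of them, a hedged one-liner (it assumes the Status field reads exactly "Stopping"):

podman ps -a --format '{{.ID}} {{.Status}}' | awk '$2 == "Stopping" {print $1}' | xargs -r podman rm -f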
2025/03/24 15:06

Problem: podman - podman system migrate

See on Red Hat:

Notes:

Error

potentially insufficient UIDs or GIDs available in user namespace

or, after a Podman upgrade:

ERRO[0000] invalid internal status, try resetting the pause process with "/usr/bin/podman system migrate": could not find any running process: no such process

Steps to reproduce

Reset to the initial state:
# egrep " setup-" /var/log/dnf.rpm.log |grep Upgraded | tail -1
2023-09-27T17:52:38+0200 SUBDEBUG Upgraded: setup-2.12.2-5.el8.noarch
yum install -y setup-2.12.2-5.el8.noarch
# sudo -u awx -i podman ps ; echo $?
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES
0
sudo -u awx -i podman system migrate

At least one container must be launched:

sudo -u awx -i podman run -d monimage:latest sleep inf
Reproduction
yum install setup
sudo -u awx -i podman ps

No error yet, because no reboot has happened.

Reboot:

reboot
$ sudo -u awx -i podman ps ; echo $?
ERRO[0000] invalid internal status, try resetting the pause process with "podman system migrate": could not find any running process: no such process
1
Solution
sudo -u awx -i podman system migrate

Workaround script

Palliative maintenance: a script that automates the podman system migrate command when it is needed, avoiding podman outages after a reboot.

autofix_podman_system_migrate.sh

#! /bin/bash
 
# WHO:   Script written by JB. It must be run with the user account(s) running podman containers.
# WHAT:  See ticket #03618727
# WHY:   Bug: podman unavailable after a reboot if packages such as podman, setup... were upgraded.
# WHEN:  Run at boot
# HOW:   With a crontab entry such as:
#             '@reboot /var/lib/awx/scripts/autofix_podman_system_migrate.sh'
#   or, from root's crontab: '@reboot sudo -u awx -i /var/lib/awx/scripts/autofix_podman_system_migrate.sh'
#   or with systemd
 
# 'podman ps' prints the "podman system migrate" hint when the pause process is stale
if podman ps 2>&1 | grep -q 'podman system migrate'
then
        podman system migrate
 
        # Fix containers still in Stopping state
        if [[ "$USER" == 'awx' ]]
        then
                sleep 1
                # xargs handles zero or several container IDs (the quoted "$(...)" form broke with multiple IDs)
                podman ps -a | grep -v 'seconds' | awk '/Stopping/ {print $1}' | xargs -r podman rm -f 2>/dev/null || true
        fi
fi

autofix_podman_system_migrate.service

[Unit]
Description=Autofix podman system migrate
 
[Service]
Type=oneshot
ExecStart=/bin/bash /var/lib/awx/scripts/autofix_podman_system_migrate.sh
RemainAfterExit=yes
User=awx
 
[Install]
WantedBy=receptor.service
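
To install the unit, something along these lines (unit and script paths as above):

cp autofix_podman_system_migrate.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable autofix_podman_system_migrate.service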

Other approaches

    - name: Ensure changes are applied to podman
      command: podman system migrate
      environment:
        XDG_RUNTIME_DIR: "{{ podman_tmp.path }}"
2025/03/24 15:06

Problem: podman - image is in use by a container

$ podman images |grep '<none>'
<none>                                                                           <none>      38808a9199c1  2 months ago   294 MB
<none>                                                                           <none>      e5bb0bda807d  3 months ago   371 MB

$ podman ps -a
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

$ podman rmi 38808a9199c1
Error: image used by af0c32ff54054be71254ea980c9181dba8068ecac9e9e69b7190479602eb9721: image is in use by a container: consider listing external containers and force-removing image

$ podman ps -a --storage |grep af0c3
af0c32ff5405  docker.io/library/259c91e8558f1ecb99392f6bc7ef9ea5320179b1e35b1c7c3101818f2547a453-tmp:latest  buildah     2 months ago  Storage     38808a9199c1-working-container
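
Note: on recent Podman versions, podman ps -a --external is the current name for the deprecated --storage flag.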

Solution

buildah rm --all

or:

podman rmi 38808a9199c1 -f

While we're at it, let's do some cleanup:

podman image prune

Or:

podman system prune 
2025/03/24 15:06