Notes VNC

See also:

  • ssvnc (secure VNC)
  • xdmcp (another protocol)

Alternatives:

TightVNC client

Install on Debian

apt-get install xtightvncviewer

Enable / Disable full screen
Ctrl + Alt + Shift + F

SSVNC client (TightVNC over SSH or SSL/TLS)

Example VNC-over-SSH configuration

VNC Host:Display    juan@192.168.10.4
Proxy/Gateway       (empty)
Remote SSH Command  ssh juan@127.0.0.1
[x] Use SSH

profiles/192.168.10.4.vnc

[connection]
host=process@192.168.10.4  cmd=ssh juan@127.0.0.1
port=5900
proxyhost=
proxyport=
disp=process@192.168.10.4  cmd=ssh juan@127.0.0.1
 
[options]
use_ssh=1
use_ssl=0

Note: the IP address used for the SSH connection is indeed 192.168.10.4.
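For reference, the same connection can be opened without ssvnc by creating the SSH tunnel by hand (a sketch; the host, user, and port numbers reuse the example above):

```shell
# Forward local port 5901 to the VNC server (display :0 = port 5900)
# bound to the remote machine's loopback interface
ssh -L 5901:127.0.0.1:5900 juan@192.168.10.4

# In another terminal, point the viewer at the tunnel endpoint
xtightvncviewer 127.0.0.1:5901
```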

TightVNC server

Install

sudo apt-get install tightvncserver

Start the server

tightvncserver

On first launch, the following files will be created:

  • ~/.vnc/xstartup
  • ~/.vnc/passwd
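A minimal ~/.vnc/xstartup might look like this (a sketch; x-terminal-emulator and x-window-manager are the Debian alternatives and may differ on other distributions):

```shell
#!/bin/sh
# ~/.vnc/xstartup: run when the VNC desktop starts
[ -r "$HOME/.Xresources" ] && xrdb "$HOME/.Xresources"  # load X resources if present
xsetroot -solid grey       # plain grey background
x-terminal-emulator &      # open a terminal
x-window-manager &         # start the default window manager
```

Make it executable (chmod +x ~/.vnc/xstartup) and restart the server for the changes to take effect.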

Stop the server (display :1 here)

vncserver -kill :1

Is the server running?

ps -ef | grep Xtightvnc

See logs:

  • ~/.xsession-errors
  • ~/.vnc/*.log

Wayland

See VNC Notes Wayland

Problem
Error: Oh no! Something has gone wrong
vnc Failed to recv data from socket

On UltraVNC, check "Multi viewer connections" and "Disconnect all existing connections".

Xlib: extension "DPMS" missing on display

This can happen when you start a VNC server under Wayland.

One solution is to disable Wayland and switch back to Xorg; see Notes Wayland.

x11vnc server

See https://doc.ubuntu-fr.org/x11vnc

x11vnc -storepasswd P@ssw0rd ~/.vnc/passwd

Example

#x11vnc -noxrecord -noxfixes -noxdamage -display :0 -usepw -forever
x11vnc -noxrecord -noxfixes -noxdamage -usepw -forever -viewonly -no6 -noipv6 -notruecolor -nolookup -nodragging -nevershared

Other useful options:

  • -viewonly (view-only access)
  • -tightfilexfer / -ultrafilexfer (TightVNC / UltraVNC file transfer)
  • -nopw (run without a password; suppresses the warning)
  • -unixpw (authenticate against Unix logins)

If it exists, the following file is read automatically at startup:
$HOME/.x11vncrc
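A sketch of what that file could contain; x11vnc accepts one option per line, with the leading dash optional:

```shell
# ~/.x11vncrc: options applied at every x11vnc start
forever
usepw
viewonly
listen localhost
```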

~/.config/autostart/x11vnc.desktop

[Desktop Entry]
Type=Application
Name=x11vnc
Exec=x11vnc -noxrecord -noxfixes -noxdamage -usepw -forever -viewonly -no6 -noipv6 -notruecolor -nolookup -nodragging -nevershared -tightfilexfer -listen localhost

Security

You can add the -listen localhost option to the x11vnc server, then open an SSH tunnel from the client. ssvnc does this automatically.
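Done by hand, the tunnel could look like this (a sketch reusing the user and host from the earlier example; the remote x11vnc is restricted to localhost, so only the tunnel endpoint can reach it):

```shell
# Start x11vnc on the remote side and forward its port in one go
ssh -t -L 5900:127.0.0.1:5900 juan@192.168.10.4 \
    'x11vnc -listen localhost -display :0 -usepw'

# In another terminal, connect through the tunnel
xtightvncviewer 127.0.0.1:5900
```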

Problem: no numeric keypad

Solution: remove the -xkb option

2025/03/24 15:06

Notes VMware OpenStack VIO

Problem: HTTP Error 503

Solution

Connect to each controller and run:

service apache2 restart
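Scripted over several controllers this could look like (a sketch; the hostnames controller01..03 are placeholders):

```shell
# Restart Apache on each controller node (example hostnames)
for c in controller01 controller02 controller03; do
  ssh "$c" 'sudo service apache2 restart'
done
```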

Ansible

sudo mkdir -p /opt/vmware/vio/custom
sudo cp /var/lib/vio/ansible/custom/custom.yml.sample /opt/vmware/vio/custom/custom.yml

Then edit /opt/vmware/vio/custom/custom.yml:

# The maximum number of entities that will be returned in a collection, with no
# limit set by default.
#keystone_list_limit: 100
keystone_list_limit: 500

Apply the change:

sudo viocli deployment configure

Source: https://docs.vmware.com/en/VMware-Integrated-OpenStack/5.1/integrated-openstack-51-administration-guide.pdf

role policy.yaml


Notes VMware OpenStack VIO - Configuration

Config:

/etc/keystone/keystone.conf

[DEFAULT]                          
public_endpoint = https://192.168.51.61:5000/
admin_endpoint = https://192.168.51.61:35357/
member_role_name = _member_
list_limit = 500                                     
insecure_debug = False                               
debug = True                                
log_file = keystone.log
log_dir = /var/log/keystone
use_syslog = true
syslog_log_facility = LOG_LOCAL7      
default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO,dogpile.lock=INFO
 
[auth]
methods = password,token,saml2,openid,mapped
 
[cache]
backend = oslo_cache.memcache_pool
enabled = True
memcache_servers = 192.168.51.65:11211,192.168.51.66:11211
 
[database]
connection = CHANGEME
 
[federation]
trusted_dashboard = https://192.168.21.53/auth/websso/
trusted_dashboard = https://192.168.51.61/auth/websso/
 
[fernet_tokens]
max_active_keys = 2
 
[identity]
domain_specific_drivers_enabled = true
domain_configurations_from_database = False
 
[oslo_policy]
policy_file = /etc/keystone/policy.yaml
 
[resource]
admin_project_domain_name = Default
admin_project_name = admin
 
[saml2]
remote_id_attribute = Shib-Identity-Provider
 
[token]
expiration = 7200

/etc/keystone/domains/keystone.acme.conf

[identity]
domain_configurations_from_database = False
driver = ldap
list_limit = 500
 
[ldap]
query_scope = sub
group_name_attribute = sAMAccountName
group_objectclass = group
user_mail_attribute = mail
user_enabled_attribute = userAccountControl
group_tree_dn = CN=Openstack,OU=Groupes,DC=acme,DC=local
chase_referrals = false
user_id_attribute = sAMAccountName
group_members_are_ids = true
group_member_attribute = memberUid
page_size = 100
use_tls = false
url = ldaps://ldap.acme.local:636
user_name_attribute = sAMAccountName
user = admin
user_objectclass = organizationalPerson
group_id_attribute = cn
user_filter = (memberOf=CN=Openstack,OU=Groupes,DC=acme,DC=local)
group_desc_attribute = description
user_tree_dn = DC=acme,DC=local
user_pass_attribute = userPassword
password = CHANGEME

/etc/nova/nova.conf

[DEFAULT]                                                                                                                                                                                     
log_dir = /var/log/nova                       
lock_path = /var/lock/nova                                                                                                 
state_path = /var/lib/nova    
 
[api_database]                             
connection = sqlite:////var/lib/nova/nova_api.sqlite                                                                                                            
 
[cells]
enable = False
 
[database]
connection = sqlite:////var/lib/nova/nova.sqlite
 
[placement]
os_region_name = openstack

/etc/nova/nova-compute.conf

[DEFAULT]                                                              
compute_driver = vmwareapi.VMwareVCDriver      
allow_resize_to_same_host = true              
remove_unused_original_minimum_age_seconds = 86400
cpu_allocation_ratio = 10                                                                                                                                                                                          
ram_allocation_ratio = 1.5                                                                                                                                                             
disk_allocation_ratio = 0.0                                                                                                                                                            
resume_guests_state_on_host_boot = true
max_concurrent_builds = 20
block_device_allocate_retries = 1800         
heal_instance_info_cache_interval = 120                      
block_device_allocate_retries_interval = 2               
force_config_drive = False
dhcpbridge_flagfile = /etc/nova/nova.conf                      
dhcpbridge = /usr/bin/nova-dhcpbridge
metadata_host = 192.168.51.61                         
dhcp_domain = novalocal                                                                                                                                                                
web = /usr/share/vmware-mks
state_path = /var/lib/nova
periodic_fuzzy_delay = 120
debug = True
verbose = True
log_dir = /var/log/nova
use_syslog = true
syslog_log_facility = LOG_LOCAL7
rpc_response_timeout = 120
sync_power_state_action = dbsync
use_hypervisor_stats = True
 
[api]
use_forwarded_for = true
compute_link_prefix = https://192.168.21.53:8774
glance_link_prefix = https://192.168.21.53:9292
 
[api_database]
connection = "CHANGEME"
max_pool_size = 50
max_overflow = 50
 
[cache]
enabled = false
 
[cinder]
endpoint_template = https://192.168.51.61:8776/v3/%(project_id)s
api_insecure = true
 
[conductor]
workers = 2
 
[database]
connection = "CHANGEME"
 
[filter_scheduler]
max_io_ops_per_host = 8
max_instances_per_host = 50
 
[glance]
api_servers = https://192.168.51.61:9292
 
[keystone_authtoken]
memcached_servers = 192.168.51.65:11211,192.168.51.66:11211
auth_type = v3password
auth_url = https://192.168.51.61:35357/v3
project_name = service
username = nova
password = CHANGEME
project_domain_name = local
user_domain_name = local
 
[mks]
mksproxy_base_url = https://192.168.21.53:6090/vnc_auto.html
enabled = true
 
[neutron]
url = https://192.168.51.61:9696
service_metadata_proxy = true
metadata_proxy_shared_secret = CHANGEME
auth_type = v3password
auth_url = https://192.168.51.61:35357/v3
project_name = service
project_domain_name = local
username = neutron
user_domain_name = local
password = CHANGEME
 
[oslo_concurrency]
lock_path = /var/lock/nova
 
[oslo_messaging_rabbit]
rabbit_hosts = 192.168.51.62,192.168.51.63,192.168.51.64
rabbit_userid = test
rabbit_password = CHANGEME
rabbit_ha_queues = true
 
[oslo_messaging_zmq]
rpc_thread_pool_size = 100
 
[pci]
passthrough_whitelist = [{"vendor_id": "*", "product_id": "*"}]
 
[placement]
os_region_name = nova
os_interface = internal
auth_type = v3password                                                                                                                                                                                             
auth_url = https://192.168.51.61:35357/v3
project_name = service
project_domain_name = local
username = placement
user_domain_name = local
password = CHANGEME
 
[vmware]
serial_port_service_uri = s1cb9is4rC66cr000791
serial_port_proxy_uri = telnets://192.168.51.71:13370#thumbprint=A9:CF:EC:E6:DD:00:6A:90:C4:F7:4B:83:11:C9:70:42:13:A9:08:36
serial_log_dir = /var/log/vspc
host_ip = 192.168.51.160
host_username = Administrator@vsphere.local
host_password = CHANGEME
insecure = True
cluster_name = Production
datastore_regex = production
vnc_port_total = 6500
use_linked_clone = True
cache_prefix = VIO_9a9c86dc379144d7a4f43919d9066315_b78814fd_domain-c34
store_image_dir = /images
snapshot_format = template
import_vm_enabled = True
import_vm_relocate = True
tenant_vdc = False
passthrough = False
 
[vnc]
enabled = False
vncserver_proxyclient_address = 192.168.51.160
novncproxy_base_url = https://192.168.21.53:6080/vnc_auto.html
 
[wsgi]
api_paste_config = /etc/nova/api-paste.ini

/etc/glance/policy.json

{
    "context_is_admin":  "role:admin",
    "default": "role:admin",
 
    "add_image": "",
    "delete_image": "",
    "get_image": "",
    "get_images": "",
    "modify_image": "",
    "publicize_image": "role:admin",
    "communitize_image": "",
    "copy_from": "",
 
    "download_image": "",
    "upload_image": "",
 
    "delete_image_location": "",
    "get_image_location": "",
    "set_image_location": "",
 
    "add_member": "",
    "delete_member": "",
    "get_member": "",
    "get_members": "",
    "modify_member": "",
 
    "manage_image_cache": "role:admin",
 
    "get_task": "",
    "get_tasks": "",
    "add_task": "",
    "modify_task": "",
 
    "deactivate": "",
    "reactivate": "",
 
    "get_metadef_namespace": "",
    "get_metadef_namespaces":"",
    "modify_metadef_namespace":"",
    "add_metadef_namespace":"",
 
    "get_metadef_object":"",
    "get_metadef_objects":"",
    "modify_metadef_object":"",
    "add_metadef_object":"",
 
    "list_metadef_resource_types":"",
    "get_metadef_resource_type":"",
    "add_metadef_resource_type_association":"",
 
    "get_metadef_property":"",
    "get_metadef_properties":"",
    "modify_metadef_property":"",
    "add_metadef_property":"",
 
    "get_metadef_tag":"",
    "get_metadef_tags":"",
    "modify_metadef_tag":"",
    "add_metadef_tag":"",
    "add_metadef_tags":"" 
}

French console: append /?locale=fr_FR to the URL
See https://kb.vmware.com/s/article/1016403


Notes virtualization

Automatic virtual environment management:

OpenVZ

OpenVZ Proxmox

LXC vs OpenVZ

How to tell whether you are running inside a VM

# SystemD
systemd-detect-virt
hostnamectl
 
virt-what
virtdetect
 
grep -q '^flags.* hypervisor' /proc/cpuinfo
jean@vps1:~$ systemd-detect-virt --vm
vmware
jean@vps1:~$ systemd-detect-virt
openvz
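Wrapped in a small helper, the CPU-flag check above becomes (a sketch; the hypervisor flag only shows that hardware virtualization is advertised, so containers such as OpenVZ are not detected this way):

```shell
#!/bin/sh
# is_vm: succeeds when /proc/cpuinfo advertises the "hypervisor" flag
is_vm() {
  grep -q '^flags.*\bhypervisor\b' /proc/cpuinfo 2>/dev/null
}

if is_vm; then
  echo "hypervisor flag present: likely a VM"
else
  echo "no hypervisor flag: bare metal, container, or non-x86"
fi
```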

Am I inside a Docker container?

grep 'systemd:/system.slice/docker-' /proc/self/cgroup
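A slightly more robust variant also checks for the /.dockerenv marker file (a sketch; neither hint is guaranteed on every Docker setup):

```shell
#!/bin/sh
# in_docker: succeeds when either Docker hint is present
in_docker() {
  [ -f /.dockerenv ] || grep -q 'docker\|containerd' /proc/1/cgroup 2>/dev/null
}

if in_docker; then
  echo "inside a Docker container"
else
  echo "probably not inside Docker"
fi
```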

OpenVZ?

cat /proc/vz/veinfo

Otherwise dmesg, lsmod, or lspci will often give you a good idea.


Notes VirtualBox

Converting vmdk (VMware) to vdi (VirtualBox)

"c:\program files\oracle\virtualbox\vboxmanage" clonehd SWF73-V1-0-disk1.vmdk new.vdi --format VDI
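On recent VirtualBox versions clonehd is a deprecated alias; the equivalent modern command would be (a sketch with the same file names as above):

```shell
"c:\program files\oracle\virtualbox\vboxmanage" clonemedium disk SWF73-V1-0-disk1.vmdk new.vdi --format VDI
```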

If you clone a disk, the clone keeps the same UUID; reset it with:

rem "c:\program files\oracle\virtualbox\vboxmanage" sethduuid plop.vhd
"c:\program files\oracle\virtualbox\vboxmanage" internalcommands sethduuid plop.vhd

Defrag / Compact / Shrink

VBoxManage.exe modifymedium disk F:\VMs\vmdeb1\vmdeb1.vdi --compact

Problems / Errors

VBOX_E_OBJECT_NOT_FOUND

Solution

"c:\program files\oracle\virtualbox\vboxmanage" internalcommands sethduuid F:\install\VM\plop-clone.vdi