Remove local copy of infra.lvm_snapshots; Use feature branch version from GitHub

This commit is contained in:
2025-01-08 17:28:27 -05:00
parent e68d127845
commit 5f13492ee4
66 changed files with 4 additions and 3213 deletions

View File

@ -1,85 +0,0 @@
# bigboot
The `bigboot` role is used to increase the size of the boot partition.
The role is designed to support the automation of RHEL in-place upgrades, but can also be used for other purposes.
## Contents
The role configures a dracut pre-mount hook that executes during a reboot to increase the size of the boot partition and filesystem. To make room for the boot size increase, the role first shrinks the size of the next partition after the boot partition. This next partition must contain either an LVM physical volume or a Btrfs filesystem volume. There must be sufficient free space in the LVM volume group or Btrfs filesystem to accommodate the reduced size.
> **WARNING!**
>
> All blocks of the partition after the boot partition are copied using `sfdisk` during the reboot and this can take several minutes or more depending on the size of that partition. The bigboot script periodically outputs progress messages to the system console to make it clear that the system is not in a "hung" state, but these progress messages may not be seen if the `rhgb` or `quiet` kernel arguments are set. If the system is reset while the blocks are being copied, the partition will be irrecoverably corrupted. Do not assume the system is hung or force a reset during the bigboot reboot!
To learn more about how bigboot works, check out this [video](https://people.redhat.com/bmader/bigboot-demo.mp4).
## Role Variables
### `bigboot_partition_size` (String)
The variable `bigboot_partition_size` specifies the minimum required size of the boot partition. If the boot partition is already equal to or greater than the given size, the role will end gracefully, making no changes. The value can be given either in bytes or with an optional single-letter suffix (1024-based) as accepted by the [human_to_bytes](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/human_to_bytes_filter.html) filter plugin.
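For illustration, a hypothetical debug task (not part of the role) shows how `human_to_bytes` interprets such a value:
```yaml
- name: Show how human_to_bytes interprets a size string (illustrative only)
  ansible.builtin.debug:
    msg: "{{ '1.5G' | ansible.builtin.human_to_bytes }}"  # 1.5 * 1024^3 = 1610612736
```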
### `bigboot_size` (String)
This variable is deprecated and will be removed in a future release. Use `bigboot_partition_size` instead.
The variable `bigboot_size` specifies by how much the size of the boot partition is to be increased. The value can be given either in bytes or with an optional single-letter suffix (1024-based) as accepted by the [human_to_bytes](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/human_to_bytes_filter.html) filter plugin.
> **Note**
>
> The size increase may be slightly less than the specified value as the role will round down to the nearest multiple of the LVM volume group extent size or Btrfs sector size used for the next partition after the boot partition.
## Example playbook
The following yaml demonstrates an example playbook that runs the role to increase the size of the target host's boot partition to 1.5G:
```yaml
- name: Extend boot partition playbook
hosts: all
vars:
bigboot_partition_size: 1.5G
roles:
- bigboot
```
# Validate execution
The "Validate boot filesystem new size" task at the end of the run will indicate success or failure of the boot partition size increase. For example:
```
TASK [bigboot : Validate boot filesystem new size] ****************************************
ok: [fedora] => {
"changed": false,
"msg": "Boot filesystem size is now 1.44 GB (503.46 MB increase)"
```
If the boot partition was already equal to or greater than the given size, the bigboot pre-mount hook configuration is skipped and the host will not reboot. In this case, the run will end with the "Validate increase requested" task indicating nothing happened. For example:
```
TASK [bigboot : Validate increase requested] **********************************************
ok: [fedora] => {
"msg": "Nothing to do! Boot partition already equal to or greater than requested size."
}
```
During the reboot, the bigboot pre-mount hook logs progress messages to the console. After the reboot, `journalctl` can be used to review the log output. For example, a successful run will look similar to this:
```bash
# journalctl --boot --unit=dracut-pre-mount
Jul 02 09:40:12 fedora systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 02 09:40:12 fedora dracut-pre-mount[498]: bigboot: Shrinking partition vda3 by 536870912
Jul 02 09:40:12 fedora dracut-pre-mount[498]: bigboot: Moving up partition vda3 by 536870912
Jul 02 09:40:16 fedora dracut-pre-mount[508]: bigboot: Partition move is progressing, please wait! (00:00:01)
Jul 02 09:40:48 fedora dracut-pre-mount[498]: bigboot: Increasing boot partition vda2 by 536870912
Jul 02 09:40:49 fedora dracut-pre-mount[498]: bigboot: Updating kernel partition table
Jul 02 09:40:50 fedora dracut-pre-mount[498]: bigboot: Growing the /boot ext4 filesystem
Jul 02 09:40:50 fedora dracut-pre-mount[528]: e2fsck 1.47.0 (5-Feb-2023)
Jul 02 09:40:50 fedora dracut-pre-mount[528]: Pass 1: Checking inodes, blocks, and sizes
Jul 02 09:40:50 fedora dracut-pre-mount[528]: Pass 2: Checking directory structure
Jul 02 09:40:50 fedora dracut-pre-mount[528]: Pass 3: Checking directory connectivity
Jul 02 09:40:50 fedora dracut-pre-mount[528]: Pass 4: Checking reference counts
Jul 02 09:40:50 fedora dracut-pre-mount[528]: Pass 5: Checking group summary information
Jul 02 09:40:50 fedora dracut-pre-mount[528]: /dev/vda2: 38/65536 files (10.5% non-contiguous), 83665/262144 blocks
Jul 02 09:40:50 fedora dracut-pre-mount[529]: resize2fs 1.47.0 (5-Feb-2023)
Jul 02 09:40:50 fedora dracut-pre-mount[529]: Resizing the filesystem on /dev/vda2 to 393216 (4k) blocks.
Jul 02 09:40:50 fedora dracut-pre-mount[529]: The filesystem on /dev/vda2 is now 393216 (4k) blocks long.
Jul 02 09:40:50 fedora dracut-pre-mount[493]: Boot partition vda2 successfully increased by 536870912 (38 seconds)
```

View File

@ -1,2 +0,0 @@
bigboot_partition_size:
bigboot_size:

View File

@ -1,121 +0,0 @@
#!/bin/bash
#
# This is the new bigboot reboot script. Unlike the old script, this one
# only deals with the partitioning and boot filesystem changes required.
# The preparations to reduce the LVM physical volume or Btrfs filesystem
# volume are now done in advance by Ansible before rebooting.
#
# This script performs the following steps in this order:
#
# 1. Move the end of the next partition to make it smaller
# 2. Use sfdisk to copy the blocks of the next partition
# 3. Move the end of the boot partition making it bigger
# 4. Grow the boot filesystem
#
# Usage: bigboot.sh boot_partition_name next_partition_name boot_size_increase_in_bytes
#
# For example, this command would increase a /boot filesystem on /dev/sda1 by 500M:
#
# bigboot.sh sda1 sda2 524288000
#
# Get input values
boot_part_name="$1"
next_part_name="$2"
boot_size_increase_in_bytes="$3"
# Validate inputs
name="bigboot"
if [[ ! -b "/dev/$boot_part_name" ]]; then
echo "$name: Boot partition is not a block device: $boot_part_name"
exit 1
fi
if [[ ! -b "/dev/$next_part_name" ]]; then
echo "$name: Next partition is not a block device: $next_part_name"
exit 1
fi
if [[ ! $boot_size_increase_in_bytes -gt 0 ]]; then
echo "$name: Invalid size increase value: $boot_size_increase_in_bytes"
exit 1
fi
# Calculate device and partition details
boot_disk_device=/dev/"$(/usr/bin/basename "$(readlink -f /sys/class/block/"$boot_part_name"/..)")"
boot_part_num="$(</sys/class/block/"$boot_part_name"/partition)"
next_part_num="$(</sys/class/block/"$next_part_name"/partition)"
next_part_start="$(($(</sys/class/block/"$next_part_name"/start)*512))"
next_part_size="$(($(</sys/class/block/"$next_part_name"/size)*512))"
next_part_end="$((next_part_start+next_part_size-1))"
next_part_new_end="$((next_part_end-boot_size_increase_in_bytes))"
# Validate boot filesystem
eval "$(/usr/sbin/blkid /dev/"$boot_part_name" -o udev)"
boot_fs_type="$ID_FS_TYPE"
if [[ ! "$boot_fs_type" =~ ^ext[2-4]$|^xfs$ ]]; then
echo "$name: Boot filesystem type is not extendable: $boot_fs_type"
exit 1
fi
# Validate next partition
eval "$(/usr/sbin/blkid /dev/"$next_part_name" -o udev)"
if [[ "$ID_FS_TYPE" == "LVM2_member" ]]; then
eval "$(/usr/sbin/lvm pvs --noheadings --nameprefixes -o vg_name /dev/"$next_part_name")"
next_part_vg="$LVM2_VG_NAME"
fi
# Shrink next partition
echo "$name: Shrinking partition $next_part_name by $boot_size_increase_in_bytes"
if ! ret=$(echo Yes | /usr/sbin/parted "$boot_disk_device" ---pretend-input-tty unit B resizepart "$next_part_num" "$next_part_new_end" 2>&1); then
echo "$name: Failed shrinking partition $next_part_name: $ret"
exit 1
fi
# Output progress messages to help impatient operators recognize the server is not "hung"
( sleep 4
while t="$(ps -C sfdisk -o cputime=)"; do
echo "$name: Partition move is progressing, please wait! ($t)"
sleep 120
done ) &
# Shift next partition
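# The sfdisk input "+N," moves the partition start forward by N 512-byte
# sectors while leaving its (already reduced) size unchanged; --move-data
# also copies the partition's data blocks to the new location.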
echo "$name: Moving up partition $next_part_name by $boot_size_increase_in_bytes"
if ! ret=$(echo "+$((boot_size_increase_in_bytes/512))," | /usr/sbin/sfdisk --move-data "$boot_disk_device" -N "$next_part_num" --force 2>&1); then
echo "$name: Failed moving up partition $next_part_name: $ret"
exit 1
fi
# Increase boot partition
echo "$name: Increasing boot partition $boot_part_name by $boot_size_increase_in_bytes"
if ! ret=$(echo "- +" | /usr/sbin/sfdisk "$boot_disk_device" -N "$boot_part_num" --no-reread --force 2>&1); then
echo "$name: Failed increasing boot partition $boot_part_name: $ret"
exit 1
fi
# Update kernel partition table
echo "$name: Updating kernel partition table"
[[ "$next_part_vg" ]] && /usr/sbin/lvm vgchange -an "$next_part_vg" && sleep 1
/usr/sbin/partprobe "$boot_disk_device" && sleep 1
[[ "$next_part_vg" ]] && /usr/sbin/lvm vgchange -ay "$next_part_vg" && sleep 1
# Grow the /boot filesystem
echo "$name: Growing the /boot $boot_fs_type filesystem"
if [[ "$boot_fs_type" =~ ^ext[2-4]$ ]]; then
/usr/sbin/e2fsck -fy "/dev/$boot_part_name"
if ! /usr/sbin/resize2fs "/dev/$boot_part_name"; then
echo "$name: resize2fs error while growing the /boot filesystem"
exit 1
fi
fi
if [[ "$boot_fs_type" == "xfs" ]]; then
tmp_dir=$(/usr/bin/mktemp -d)
/usr/bin/mount -t xfs "/dev/$boot_part_name" "$tmp_dir"
/usr/sbin/xfs_growfs "/dev/$boot_part_name"
status=$?
/usr/bin/umount "/dev/$boot_part_name"
if [[ $status -ne 0 ]]; then
echo "$name: xfs_growfs error while growing the /boot filesystem"
exit 1
fi
fi
exit 0

View File

@ -1,15 +0,0 @@
#!/bin/bash
# -*- mode: shell-script; indent-tabs-mode: nil; sh-basic-offset: 4; -*-
# ex: ts=8 sw=4 sts=4 et filetype=sh
check(){
return 0
}
install() {
inst_multiple -o /usr/bin/mount /usr/bin/umount /usr/sbin/parted /usr/bin/mktemp /usr/bin/date /usr/bin/basename /usr/sbin/resize2fs /usr/sbin/partprobe /usr/sbin/lvm /usr/sbin/blkid /usr/sbin/e2fsck /usr/sbin/xfs_growfs /usr/sbin/xfs_db
# shellcheck disable=SC2154
inst_hook pre-mount 99 "$moddir/increase-boot-partition.sh"
inst_binary "$moddir/sfdisk.static" "/usr/sbin/sfdisk"
inst_simple "$moddir/bigboot.sh" "/usr/bin/bigboot.sh"
}

View File

@ -1,14 +0,0 @@
---
galaxy_info:
author: Ygal Blum, Bob Mader
description: Increase the size of the boot partition
company: Red Hat
license: MIT
min_ansible_version: "2.14"
platforms:
- name: EL
versions:
- all
galaxy_tags: []
dependencies: []
...

View File

@ -1,48 +0,0 @@
- name: Copy dracut pre-mount hook files
ansible.builtin.copy:
src: "{{ item }}"
dest: /usr/lib/dracut/modules.d/99extend_boot/
mode: "0554"
loop:
- bigboot.sh
- module-setup.sh
- sfdisk.static
- name: Resolve and copy pre-mount hook wrapper script
ansible.builtin.template:
src: increase-boot-partition.sh.j2
dest: /usr/lib/dracut/modules.d/99extend_boot/increase-boot-partition.sh
mode: '0554'
- name: Create the initramfs and reboot to run the module
vars:
initramfs_add_modules: "extend_boot"
ansible.builtin.include_role:
name: initramfs
- name: Remove dracut extend boot module
ansible.builtin.file:
path: /usr/lib/dracut/modules.d/99extend_boot
state: absent
- name: Retrieve mount points
ansible.builtin.setup:
gather_subset:
- "!all"
- "!min"
- mounts
- name: Capture boot filesystem new size
ansible.builtin.set_fact:
bigboot_boot_fs_new_size: "{{ (ansible_facts.mounts | selectattr('mount', 'equalto', '/boot') | first).size_total | int }}"
- name: Validate boot filesystem new size
ansible.builtin.assert:
that:
- bigboot_boot_fs_new_size != bigboot_boot_fs_original_size
fail_msg: >-
Boot filesystem size '{{ bigboot_boot_fs_new_size }}' did not change
success_msg: >-
Boot filesystem size is now
{{ bigboot_boot_fs_new_size | int | human_readable }}
({{ (bigboot_boot_fs_new_size | int - bigboot_boot_fs_original_size | int) | human_readable }} increase)

View File

@ -1,57 +0,0 @@
- name: Find the boot mount entry
ansible.builtin.set_fact:
bigboot_boot_mount_entry: "{{ ansible_facts.mounts | selectattr('mount', 'equalto', '/boot') | first | default('', true) }}"
- name: Validate boot mount entry
ansible.builtin.assert:
that:
- bigboot_boot_mount_entry.device is defined
fail_msg: "No /boot mount point found."
- name: Calculate the partition to look for
ansible.builtin.set_fact:
bigboot_boot_partition_name: "{{ (bigboot_boot_mount_entry.device | split('/'))[-1] }}"
- name: Find the boot device parent
ansible.builtin.set_fact:
bigboot_boot_disk: "{{ item.key }}"
with_dict: "{{ ansible_facts.devices }}"
when: bigboot_boot_partition_name in item.value.partitions
- name: Capture boot device details
ansible.builtin.set_fact:
bigboot_boot_device_name: "/dev/{{ bigboot_boot_disk }}"
bigboot_boot_fs_original_size: "{{ bigboot_boot_mount_entry.size_total | int }}"
bigboot_boot_device_sectors: "{{ ansible_facts.devices[bigboot_boot_disk].partitions[bigboot_boot_partition_name].sectors | int }}"
bigboot_boot_device_sectorsize: "{{ ansible_facts.devices[bigboot_boot_disk].partitions[bigboot_boot_partition_name].sectorsize | int }}"
- name: Calculate boot device current size
ansible.builtin.set_fact:
bigboot_boot_device_bytes: "{{ bigboot_boot_device_sectors | int * bigboot_boot_device_sectorsize | int }}"
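# Partitions are iterated in name order (dictsort); the task below records the
# partition that immediately follows the boot partition on the same disk.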
- name: Find the next partition
ansible.builtin.set_fact:
bigboot_next_partition_name: "{{ ansible_loop.nextitem.0 | default(omit, true) }}"
when: item.0 == bigboot_boot_partition_name
loop: "{{ ansible_facts.devices[bigboot_boot_disk].partitions | dictsort }}"
loop_control:
extended: true
- name: Validate next partition exists
ansible.builtin.assert:
that:
- bigboot_next_partition_name is defined
fail_msg: "There is no partition found after the /boot partition."
- name: Find Btrfs or LVM
ansible.builtin.set_fact:
bigboot_next_partition_btrfs: "{{ ansible_facts.mounts | selectattr('device', 'equalto', '/dev/' + bigboot_next_partition_name) |
selectattr('fstype', 'equalto', 'btrfs') | map(attribute='mount') | first | default(omit, true) }}"
bigboot_next_partition_vg: "{{ ansible_facts.lvm.pvs['/dev/' + bigboot_next_partition_name].vg | default(omit, true) }}"
bigboot_next_partition_type_checked: true
- name: Validate next partition type
ansible.builtin.assert:
that:
- bigboot_next_partition_btrfs is defined or bigboot_next_partition_vg is defined
fail_msg: "The partition after the /boot partition is neither LVM or Btrfs."

View File

@ -1,58 +0,0 @@
---
- name: Make sure the required related facts are available
ansible.builtin.setup:
gather_subset:
- "!all"
- "!min"
- mounts
- devices
- name: Validate initramfs preflight
ansible.builtin.include_role:
name: initramfs
tasks_from: preflight
- name: Get boot device info
ansible.builtin.include_tasks:
file: get_boot_device_info.yml
- name: Convert bigboot_partition_size to bytes
ansible.builtin.set_fact:
bigboot_partition_size_bytes: "{{ bigboot_partition_size | ansible.builtin.human_to_bytes }}"
when: bigboot_partition_size | default('', true) | length > 0
- name: Convert bigboot_size to bytes
ansible.builtin.set_fact:
bigboot_size_bytes: "{{ bigboot_size | ansible.builtin.human_to_bytes }}"
when: bigboot_partition_size_bytes is undefined and bigboot_size | default('', true) | length > 0
- name: Calculate bigboot increase
ansible.builtin.set_fact:
bigboot_increase_bytes: "{{ bigboot_partition_size_bytes | default(bigboot_boot_device_bytes, true) | int -
bigboot_boot_device_bytes | int +
bigboot_size_bytes | default('0', true) | int }}"
- name: Prepare Btrfs for bigboot
ansible.builtin.include_tasks:
file: prep_btrfs.yml
when:
- bigboot_increase_bytes | int > 0
- bigboot_next_partition_btrfs is defined
- name: Prepare LVM for bigboot
ansible.builtin.include_tasks:
file: prep_lvm.yml
when:
- bigboot_increase_bytes | int > 0
- bigboot_next_partition_vg is defined
- name: Configure pre-mount hook and reboot
ansible.builtin.include_tasks:
file: do_bigboot_reboot.yml
when:
- bigboot_increase_bytes | int > 0
- name: Validate increase requested
ansible.builtin.debug:
msg: "Nothing to do! Boot partition already equal to or greater than requested size."
when: bigboot_increase_bytes | int <= 0

View File

@ -1,19 +0,0 @@
- name: Find Btrfs sector size
ansible.builtin.slurp:
src: "/sys/fs/btrfs/{{ ansible_facts.mounts | selectattr('mount', 'equalto', bigboot_next_partition_btrfs) | map(attribute='uuid') | first }}/sectorsize"
register: sectorsize
- name: Align bigboot increase to sector size
ansible.builtin.set_fact:
bigboot_increase_bytes: "{{ bigboot_increase_bytes | int - (bigboot_increase_bytes | int % sectorsize.content | b64decode | int) }}"
- name: Btrfs volume reduce
ansible.builtin.command:
cmd: >-
/usr/sbin/btrfs
filesystem resize
1:-{{ bigboot_increase_bytes }}
{{ bigboot_next_partition_btrfs }}
when: bigboot_increase_bytes | int > 0
changed_when: true
register: resize_cmd

View File

@ -1,54 +0,0 @@
- name: Find physical volume size
ansible.builtin.command:
cmd: >-
/usr/sbin/lvm pvs
--noheadings --nosuffix --units b
-o pv_size /dev/{{ bigboot_next_partition_name }}
changed_when: false
register: pv_size
- name: Find volume group extent size
ansible.builtin.command:
cmd: >
/usr/sbin/lvm vgs
--noheadings --nosuffix --units b
-o vg_extent_size {{ bigboot_next_partition_vg }}
changed_when: false
register: vg_extent_size
- name: Align bigboot increase to extent size
ansible.builtin.set_fact:
bigboot_increase_bytes: "{{ bigboot_increase_bytes | int - (bigboot_increase_bytes | int % vg_extent_size.stdout | int) }}"
- name: Test mode pvresize
ansible.builtin.command:
cmd: >-
/usr/sbin/lvm pvresize
--test --yes
--setphysicalvolumesize {{ pv_size.stdout | int - bigboot_increase_bytes | int }}B
/dev/{{ bigboot_next_partition_name }}
when: bigboot_increase_bytes | int > 0
changed_when: false
failed_when: pvresize_test.rc not in [0, 5]
register: pvresize_test
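# The pvmove below evicts any extents allocated at the tail of the physical
# volume: the range starts one extent before the new PV boundary, i.e.
# (new size in bytes) / (extent size) - 1, and runs to the end of the PV.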
- name: Evict extents from end of physical volume
ansible.builtin.command:
cmd: >-
/usr/sbin/lvm pvmove
--alloc anywhere
/dev/{{ bigboot_next_partition_name }}:{{ (((pv_size.stdout | int - bigboot_increase_bytes | int) / vg_extent_size.stdout | int) - 1) | int }}-
when: pvresize_test.rc | default(0, true) == 5
changed_when: true
register: pvmove
- name: Real pvresize
ansible.builtin.command:
cmd: >-
/usr/sbin/lvm pvresize
--yes
--setphysicalvolumesize {{ pv_size.stdout | int - bigboot_increase_bytes | int }}B
/dev/{{ bigboot_next_partition_name }}
when: bigboot_increase_bytes | int > 0
changed_when: true
register: pvresize_real

View File

@ -1,17 +0,0 @@
#!/bin/bash
main() {
start=$(/usr/bin/date +%s)
# run bigboot.sh to increase boot partition and file system size
sh /usr/bin/bigboot.sh "{{ bigboot_boot_partition_name }}" "{{ bigboot_next_partition_name }}" "{{ bigboot_increase_bytes }}"
status=$?
end=$(/usr/bin/date +%s)
# write the log file
if [[ $status -eq 0 ]]; then
echo "Boot partition {{ bigboot_boot_partition_name }} successfully increased by {{ bigboot_increase_bytes }} ("$((end-start))" seconds)"
else
echo "Failed to extend boot partition ("$((end-start))" seconds)"
fi
}
main "$0" >&2

View File

@ -1,59 +0,0 @@
# initramfs
The `initramfs` role is included by the `shrink_lv` and `bigboot` roles to run an atomic flow of building and using a temporary initramfs in a reboot and restoring the original one.
The role is designed to be internal for this collection and support the automation of RHEL in-place upgrades, but can also be used for other purposes.
## Contents
To allow fast fail, the role provides a [`preflight.yml`](./tasks/preflight.yml) tasks file to be used at the start of the playbook.
Please note that the [`main`](./tasks/main.yml) task file will not run the preflight checks
## Role Variables
All variables are optional
### `initramfs_add_modules`
`initramfs_add_modules` is a space-separated list of dracut modules to be added to the default set of modules.
See [`dracut`](https://man7.org/linux/man-pages/man8/dracut.8.html) `-a` parameter for details.
### `initramfs_backup_extension`
`initramfs_backup_extension` is the file extension for the backup initramfs file.
Defaults to `old`
### `initramfs_post_reboot_delay`
`initramfs_post_reboot_delay` sets the number of seconds to wait after the reboot command succeeds before attempting to validate that the system rebooted successfully.
The value is passed to the [`post_reboot_delay`](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/reboot_module.html#parameter-post_reboot_delay) parameter.
Defaults to `30`
### `initramfs_reboot_timeout`
`initramfs_reboot_timeout` sets the maximum number of seconds to wait for the machine to reboot and respond to a test command.
The value is passed to the [`reboot_timeout`](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/reboot_module.html#parameter-reboot_timeout) parameter.
Defaults to `7200`
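As a sketch, these reboot settings can be overridden when including the role (the module name and values below are only examples):
```yaml
- name: Create the initramfs and reboot to run the module
  vars:
    initramfs_add_modules: "my_extra_module"
    initramfs_post_reboot_delay: 60
    initramfs_reboot_timeout: 3600
  ansible.builtin.include_role:
    name: initramfs
```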
## Example of a playbook to run the role
The following yaml is an example of a playbook that first runs the role's preflight checks and then includes the role to build a temporary initramfs with an extra dracut module and reboot into it.
```yaml
- name: Extend boot partition playbook
hosts: all
tasks:
- name: Validate initramfs preflight
ansible.builtin.include_role:
name: initramfs
tasks_from: preflight
- name: Create the initramfs and reboot to run the module
vars:
initramfs_add_modules: "my_extra_module"
ansible.builtin.include_role:
name: initramfs
```

View File

@ -1,4 +0,0 @@
initramfs_backup_extension: old
initramfs_add_modules: ""
initramfs_post_reboot_delay: 30
initramfs_reboot_timeout: 7200

View File

@ -1,14 +0,0 @@
---
galaxy_info:
author: Ygal Blum, Bob Mader
description: Included by other roles to run an atomic flow of building and using a temporary initramfs in a reboot and restoring the original one
company: Red Hat
license: MIT
min_ansible_version: "2.14"
platforms:
- name: EL
versions:
- all
galaxy_tags: []
dependencies: []
...

View File

@ -1,32 +0,0 @@
---
- name: Make sure the required related facts are available
ansible.builtin.setup:
gather_subset:
- "!all"
- "!min"
- kernel
- name: Get kernel version
ansible.builtin.set_fact:
initramfs_kernel_version: "{{ ansible_facts.kernel }}"
- name: Create a backup of the current initramfs
ansible.builtin.copy:
remote_src: true
src: /boot/initramfs-{{ initramfs_kernel_version }}.img
dest: /boot/initramfs-{{ initramfs_kernel_version }}.img.{{ initramfs_backup_extension }}
mode: "0600"
- name: Create a new initramfs with the optional additional modules
# yamllint disable-line rule:line-length
ansible.builtin.command: '/usr/bin/dracut {{ ((initramfs_add_modules | length) > 0) | ternary("-a", "") }} "{{ initramfs_add_modules }}" --kver {{ initramfs_kernel_version }} --force'
changed_when: true
- name: Reboot host
ansible.builtin.import_role:
name: verified_reboot
- name: Restore previous initramfs
# yamllint disable-line rule:line-length
ansible.builtin.command: '/usr/bin/mv -f /boot/initramfs-{{ initramfs_kernel_version }}.img.{{ initramfs_backup_extension }} /boot/initramfs-{{ initramfs_kernel_version }}.img'
changed_when: true

View File

@ -1,27 +0,0 @@
---
- name: Make sure the required related facts are available
ansible.builtin.setup:
gather_subset:
- "!all"
- "!min"
- kernel
- name: Get kernel version
ansible.builtin.set_fact:
initramfs_kernel_version: "{{ ansible_facts.kernel }}"
- name: Get default kernel
ansible.builtin.command:
cmd: /sbin/grubby --default-kernel
register: initramfs_grubby_rc
changed_when: false
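# grubby prints a path such as /boot/vmlinuz-<version>; the expression below
# takes the file name, drops the leading "vmlinuz" component and keeps the
# kernel version string.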
- name: Parse default kernel version
ansible.builtin.set_fact:
initramfs_default_kernel: "{{ ((((initramfs_grubby_rc.stdout_lines[0] | split('/'))[2] | split('-'))[1:]) | join('-')) | trim }}"
- name: Check the values
ansible.builtin.assert:
that: initramfs_default_kernel == initramfs_kernel_version
fail_msg: "Current kernel version '{{ initramfs_kernel_version }}' is not the default version '{{ initramfs_default_kernel }}'"
success_msg: "Current kernel version {{ initramfs_kernel_version }} and default version {{ initramfs_default_kernel }} match"

View File

@ -1,55 +0,0 @@
# shrink_lv
The `shrink_lv` role is used to decrease the size of logical volumes and the file system within them.
The role is designed to support the automation of RHEL in-place upgrades, but can also be used for other purposes.
## Contents
The role contains the shell scripts to shrink the logical volume and file system, as well as the script wrapping it to run as part of the pre-mount step during the boot process.
## Role Variables
### `shrink_lv_devices`
The variable `shrink_lv_devices` is the list of logical volumes to shrink and the target size for those volumes.
#### `device`
The device that is mounted as listed under `/proc/mount`.
If the same device has multiple paths, e.g. `/dev/vg/lv` and `/dev/mapper/vg/lv`, pass the path that is mounted.
#### `size`
The target size of the logical volume and filesystem after the role has completed.
The value can be either in bytes or with optional single letter suffix (1024 bases).
See `Unit options` type `iec` of [`numfmt`](https://man7.org/linux/man-pages/man1/numfmt.1.html)
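For illustration, a `shrink_lv_devices` list may mix suffixed and plain byte values (the device paths below are examples only):
```yaml
shrink_lv_devices:
  - device: /dev/vg/lv            # pass the path that is actually mounted
    size: 4G                      # 1024-based suffix
  - device: /dev/mapper/vg-data   # hypothetical second device
    size: 4294967296              # plain bytes
```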
## Example of a playbook to run the role
The following yaml is an example of a playbook that runs the role against all hosts to shrink the logical volume `lv` in volume group `vg` to 4G.
```yaml
- name: Shrink Logical Volumes playbook
hosts: all
vars:
shrink_lv_devices:
- device: /dev/vg/lv
size: 4G
roles:
- shrink_lv
```
# Validate execution
The script will add an entry to the kernel messages (`/dev/kmsg` or `/var/log/messages`) with success or failure.
In case of failure, it may also include an error message retrieved from the execution of the script.
A successful execution will look similar to this:
```bash
[root@localhost ~]# cat /var/log/messages |grep Resizing -A 2 -B 2
Oct 16 17:55:00 localhost /dev/mapper/rhel-root: 29715/2686976 files (0.2% non-contiguous), 534773/10743808 blocks
Oct 16 17:55:00 localhost dracut-pre-mount: resize2fs 1.42.9 (28-Dec-2013)
Oct 16 17:55:00 localhost journal: Resizing the filesystem on /dev/mapper/rhel-root to 9699328 (4k) blocks.#012The filesystem on /dev/mapper/rhel-root is now 9699328 blocks long.
Oct 16 17:55:00 localhost journal: Size of logical volume rhel/root changed from 40.98 GiB (10492 extents) to 37.00 GiB (9472 extents).
Oct 16 17:55:00 localhost journal: Logical volume rhel/root successfully resized.
```

View File

@ -1 +0,0 @@
shrink_lv_backup_extension: old

View File

@ -1,14 +0,0 @@
#!/bin/bash
# -*- mode: shell-script; indent-tabs-mode: nil; sh-basic-offset: 4; -*-
# ex: ts=8 sw=4 sts=4 et filetype=sh
check(){
return 0
}
install() {
inst_multiple -o /usr/bin/numfmt /usr/bin/findmnt /usr/bin/lsblk /usr/sbin/lvm /usr/bin/awk /usr/bin/sed /usr/bin/sort /usr/bin/mktemp /usr/bin/date /usr/bin/head /usr/sbin/blockdev /usr/sbin/tune2fs /usr/sbin/resize2fs /usr/bin/cut /usr/sbin/fsadm /usr/sbin/fsck.ext4 /usr/libexec/lvresize_fs_helper /usr/sbin/cryptsetup /usr/bin/logger /usr/bin/basename /usr/bin/getopt
# shellcheck disable=SC2154
inst_hook pre-mount 99 "$moddir/shrink-start.sh"
inst_simple "$moddir/shrink.sh" "/usr/bin/shrink.sh"
}

View File

@ -1,253 +0,0 @@
#!/bin/bash
VOLUME_SIZE_ALIGNMENT=4096
function get_device_name() {
if [[ "$1" == "UUID="* ]]; then
dev_name=$( parse_uuid "$1" )
else
dev_name=$(/usr/bin/cut -d " " -f 1 <<< "$1")
fi
status=$?
if [[ $status -ne 0 ]]; then
return $status
fi
echo "$dev_name"
return $status
}
function ensure_size_in_bytes() {
local expected_size
expected_size=$(/usr/bin/numfmt --from iec "$1")
(( expected_size=(expected_size+VOLUME_SIZE_ALIGNMENT)/VOLUME_SIZE_ALIGNMENT*VOLUME_SIZE_ALIGNMENT ))
echo $expected_size
}
function is_device_mounted() {
/usr/bin/findmnt --source "$1" 1>&2>/dev/null
status=$?
if [[ $status -eq 0 ]]; then
echo "Device $1 is mounted" >&2
return 1
fi
return 0
}
function get_current_volume_size() {
val=$(/usr/bin/lsblk -b "$1" -o SIZE --noheadings)
status=$?
if [[ $status -ne 0 ]]; then
return $status
fi
echo "$val"
return 0
}
function is_lvm(){
val=$( /usr/bin/lsblk "$1" --noheadings -o TYPE 2>&1)
status=$?
if [[ $status -ne 0 ]]; then
echo "Failed to list block device properties for $2: $val" >&2
return 1
fi
if [[ "$val" != "lvm" ]]; then
echo "Device $device_name is not of lvm type" >&2
return 1
fi
return 0
}
function parse_uuid() {
uuid=$(/usr/bin/awk '{print $1}'<<< "$1"|/usr/bin/awk -F'UUID=' '{print $2}')
val=$(/usr/bin/lsblk /dev/disk/by-uuid/"$uuid" -o NAME --noheadings 2>/dev/null)
status=$?
if [[ $status -ne 0 ]]; then
echo "Failed to retrieve device name for UUID=$uuid" >&2
return $status
fi
echo "/dev/mapper/$val"
return 0
}
function shrink_volume() {
/usr/sbin/lvm lvreduce "$NOLOCKING" --resizefs -L "$2"b "$1"
return $?
}
function check_volume_size() {
current_size=$(get_current_volume_size "$1")
if [[ $current_size -lt $2 ]];then
echo "Current volume size for device $1 ($current_size bytes) is lower to expected $2 bytes" >&2
return 1
fi
if [[ $current_size -eq $2 ]]; then
echo "Current volume size for device $1 already equals $2 bytes" >&2
return 1
fi
return $?
}
function convert_size_to_fs_blocks(){
local device=$1
local size=$2
block_size_in_bytes=$(/usr/sbin/tune2fs -l "$device" | /usr/bin/awk '/Block size:/{print $3}')
echo $(( size / block_size_in_bytes ))
}
function calculate_expected_resized_file_system_size_in_blocks(){
local device=$1
increment_boot_partition_in_blocks=$(convert_size_to_fs_blocks "$device" "$INCREMENT_BOOT_PARTITION_SIZE_IN_BYTES")
total_block_count=$(/usr/sbin/tune2fs -l "$device" | /usr/bin/awk '/Block count:/{print $3}')
new_fs_size_in_blocks=$(( total_block_count - increment_boot_partition_in_blocks ))
echo $new_fs_size_in_blocks
}
function check_filesystem_size() {
local device=$1
local new_fs_size_in_blocks=$2
# $2 holds the requested new volume size in bytes; convert it to filesystem blocks
new_fs_size_in_blocks=$(convert_size_to_fs_blocks "$device" "$new_fs_size_in_blocks")
# it is possible that running this command after resizing it might give an even smaller number.
minimum_blocks_required=$(/usr/sbin/resize2fs -P "$device" 2> /dev/null | /usr/bin/awk '{print $NF}')
if [[ "$new_fs_size_in_blocks" -le "0" ]]; then
echo "Unable to shrink volume: New size is 0 blocks"
return 1
fi
if [[ $minimum_blocks_required -gt $new_fs_size_in_blocks ]]; then
echo "Unable to shrink volume: Estimated minimum size of the file system $1 ($minimum_blocks_required blocks) is greater than the new size $new_fs_size_in_blocks blocks" >&2
return 1
fi
return 0
}
function process_entry() {
is_lvm "$1" "$3"
status=$?
if [[ $status -ne 0 ]]; then
return "$status"
fi
expected_size_in_bytes=$(ensure_size_in_bytes "$2")
check_filesystem_size "$1" "$expected_size_in_bytes"
status=$?
if [[ $status -ne 0 ]]; then
return "$status"
fi
check_volume_size "$1" "$expected_size_in_bytes"
status=$?
if [[ $status -ne 0 ]]; then
return "$status"
fi
is_device_mounted "$1"
status=$?
if [[ $status -ne 0 ]]; then
return "$status"
fi
shrink_volume "$1" "$expected_size_in_bytes"
return $?
}
function display_help() {
echo "Program to shrink ext4 file systems hosted in Logical Volumes.
Usage: '$(basename "$0")' [-h] [-d=|--device=]
Example:
where:
-h show this help text
-d|--device= name or UUID of the device that holds an ext4 and the new size separated by a ':'
for example /dev/my_group/my_vol:2G
Sizes will be rounded to be 4K size aligned"
}
function parse_flags() {
for i in "$@"
do
case $i in
-d=*|--device=*)
entries+=("${i#*=}")
;;
-h)
display_help
exit 0
;;
*)
# unknown option
echo "Unknown flag $i"
display_help
exit 1
;;
esac
done
if [[ ${#entries[@]} == 0 ]]; then
display_help
exit 0
fi
}
function parse_entry() {
IFS=':'
read -ra strarr <<< "$1"
if [[ ${#strarr[@]} != 2 ]]; then
echo "Invalid device entry $1"
display_help
return 1
fi
device="${strarr[0]}"
expected_size="${strarr[1]}"
}
function get_nolocking_opts() {
local lvm_version
lvm_version="$(/usr/sbin/lvm version | /usr/bin/grep 'LVM version:')"
status=$?
if [[ $status -ne 0 ]]; then
echo "Error getting LVM version '$lvm_version'"
exit $status
fi
# true when LVM version is older than 2.03
if echo -e "${lvm_version##*:}\n2.03" | /usr/bin/sed 's/^ *//' | /usr/bin/sort -V -C; then
NOLOCKING='--config=global{locking_type=0}'
else
NOLOCKING='--nolocking'
fi
}
function main() {
local -a entries=()
local run_status=0
parse_flags "$@"
get_nolocking_opts
for entry in "${entries[@]}"
do
local device
local expected_size
parse_entry "$entry"
status=$?
if [[ $status -ne 0 ]]; then
run_status=$status
continue
fi
device_name=$( get_device_name "$device" )
status=$?
if [[ $status -ne 0 ]]; then
run_status=$status
continue
fi
process_entry "$device_name" "$expected_size" "$device"
status=$?
if [[ $status -ne 0 ]]; then
run_status=$status
fi
done
exit $run_status
}
main "$@"

View File

@ -1,14 +0,0 @@
---
galaxy_info:
author: Ygal Blum, Bob Mader
description: Decrease logical volume size along with the filesystem
company: Red Hat
license: MIT
min_ansible_version: "2.14"
platforms:
- name: EL
versions:
- all
galaxy_tags: []
dependencies: []
...

View File

@ -1,20 +0,0 @@
- name: Get the mount point info
ansible.builtin.set_fact:
shrink_lv_mount_info: "{{ ansible_facts['mounts'] | selectattr('device', 'equalto', item['device']) | first }}"
- name: Assert that the mount point exists
ansible.builtin.assert:
that: shrink_lv_mount_info['device'] is defined
fail_msg: "Mount point {{ item['device'] }} does not exist"
- name: Assert that the filesystem is supported
ansible.builtin.assert:
that: shrink_lv_mount_info['fstype'] in ['ext4']
fail_msg: "Unsupported filesystem '{{ shrink_lv_mount_info['fstype'] }}' on '{{ item['device'] }}'"
- name: Assert that the filesystem has enough free space
ansible.builtin.assert:
that: shrink_lv_mount_info['block_size'] * shrink_lv_mount_info['block_used'] < (item['size'] | ansible.builtin.human_to_bytes)
fail_msg: >
Requested size {{ item['size'] }} is smaller than currently used
{{ (shrink_lv_mount_info['block_size'] * shrink_lv_mount_info['block_used']) | ansible.builtin.human_readable }}

View File

@ -1,13 +0,0 @@
---
- name: Set device for mount
ansible.builtin.set_fact:
shrink_lv_set_device: "{{ ansible_facts['mounts'] | selectattr('device', 'equalto', item['device']) | first }}"
- name: Assert that the filesystem has shrunk
ansible.builtin.assert:
# yamllint disable-line rule:line-length
that: (shrink_lv_set_device['size_total'] | int) <= (item['size'] | ansible.builtin.human_to_bytes)
fail_msg: >
Logical Volume {{ item['device'] }} was NOT shrunk as requested.
success_msg: >
Logical Volume {{ item['device'] }} has been shrunk as requested.

View File

@ -1,48 +0,0 @@
---
- name: Make sure the required facts are available
ansible.builtin.setup:
gather_subset:
- "!all"
- "!min"
- kernel
- mounts
- name: Run preflight checks
ansible.builtin.include_tasks: preflight.yaml
- name: Copy shrink LV dracut module
ansible.builtin.copy:
src: "{{ item }}"
dest: /usr/lib/dracut/modules.d/99shrink_lv/
mode: "0554"
loop:
- module-setup.sh
- shrink.sh
- name: Resolve and copy the shrink-start script
ansible.builtin.template:
src: shrink-start.sh.j2
dest: /usr/lib/dracut/modules.d/99shrink_lv/shrink-start.sh
mode: '0554'
- name: Create the initramfs and reboot to run the module
vars:
initramfs_add_modules: "shrink_lv lvm"
ansible.builtin.include_role:
name: initramfs
- name: Remove dracut extend boot module
ansible.builtin.file:
path: /usr/lib/dracut/modules.d/99shrink_lv
state: absent
- name: Retrieve mount points
ansible.builtin.setup:
gather_subset:
- "!all"
- "!min"
- mounts
- name: Check if device has shrunken successfully
ansible.builtin.include_tasks: check_if_shrunk.yml
loop: "{{ shrink_lv_devices }}"

View File

@ -1,17 +0,0 @@
---
- name: Assert shrink_lv_devices
ansible.builtin.assert:
that:
- shrink_lv_devices is defined
- shrink_lv_devices | type_debug == "list"
- shrink_lv_devices | length > 0
fail_msg: shrink_lv_devices must be a list and include at least one element
- name: Validate initramfs preflight
ansible.builtin.include_role:
name: initramfs
tasks_from: preflight
- name: Check all devices
ansible.builtin.include_tasks: check_device.yaml
loop: "{{ shrink_lv_devices }}"

View File

@ -1,14 +0,0 @@
#!/bin/bash
activate_volume_groups(){
for vg in $(/usr/sbin/lvm vgs -o name --noheading 2>/dev/null); do
/usr/sbin/lvm vgchange -ay "$vg"
done
}
main() {
activate_volume_groups
/usr/bin/shrink.sh {% for device in shrink_lv_devices %}--device={{ device.device }}:{{ device.size }} {% endfor %} 1>&2 >/dev/kmsg
}
main "$0"

View File

@ -1,84 +0,0 @@
# snapshot_create role
The `snapshot_create` role is used to control the creation for a defined set of LVM snapshot volumes.
In addition, it can optionally save the Grub configuration and image files under /boot and configure settings to enable the LVM snapshot autoextend capability.
The role will verify free space and should fail if there is not enough or if any snapshots already exist for the given `snapshot_create_set_name`.
The role is designed to support the automation of RHEL in-place upgrades, but can also be used to reduce the risk of more mundane system maintenance activities.
## Role Variables
### `snapshot_create_check_only`
When set to `true` the role will only verify there is enough free space for the specified snapshots and not create them.
Default `false`
### `snapshot_create_set_name`
The variable `snapshot_create_set_name` is used to identify the list of volumes to be operated upon.
The role will use the following naming convention when creating the snapshots:
`<Origin LV name>_<snapshot_create_set_name>`
### `snapshot_create_boot_backup`
Boolean to specify that the role should preserve the Grub configuration and image files under /boot required for booting the default kernel.
The files are preserved in a compressed tar archive at `/root/boot-backup-<snapshot_create_set_name>.tgz`. Default is `false`.
> **Warning**
>
> When automating RHEL in-place upgrades, do not perform a Grub to Grub2 migration as part of your upgrade playbook. It will invalidate your boot backup and cause a subsequent `revert` action to fail. For example, if you are using the [`upgrade`](https://github.com/redhat-cop/infra.leapp/tree/main/roles/upgrade#readme) role from the [`infra.leapp`](https://github.com/redhat-cop/infra.leapp) collection, do not set `update_grub_to_grub_2` to `true`. Grub to Grub2 migration should only be performed after the `remove` action has been performed to delete the snapshots and boot backup.
### `snapshot_create_snapshot_autoextend_threshold`
Configure the lvm.conf `snapshot_autoextend_threshold` setting to the given value before creating snapshots.
### `snapshot_create_snapshot_autoextend_percent`
Configure the lvm.conf `snapshot_autoextend_percent` setting to the given value before creating snapshots.
### `snapshot_create_volumes`
This is the list of logical volumes for which snapshots are to be created and the size requirements for those snapshots. The volumes list is only required when the role is run with the check or create action.
### `vg`
The volume group of the origin logical volume for which a snapshot will be created.
### `lv`
The origin logical volume for which a snapshot will be created.
### `size`
The size of the snapshot to be created, following the definition of the
[size](https://docs.ansible.com/ansible/latest/collections/community/general/lvol_module.html#parameter-size)
parameter of the `community.general.lvol` module.
To create a thin provisioned snapshot of a thin provisioned volume, omit the `size` parameter or set it to `0`.
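As an illustrative sketch (the volume group and volume names are examples), a snapshot set can mix fixed-size and thin-provisioned entries:
```yaml
snapshot_create_volumes:
  - vg: rootvg
    lv: root
    size: 2G    # classic snapshot with a fixed size
  - vg: thinvg
    lv: data    # thin-provisioned origin: size omitted, so a thin snapshot is created
```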
## Example Playbooks
Perform a space check and fail if there will not be enough space for all the snapshots in the set.
If there is sufficient space, proceed to create snapshots for the listed logical volumes.
Each snapshot will be sized to 20% of the origin volume size.
Snapshot autoextend settings are configured to enable free space in the volume group to be allocated to any snapshot that may exceed 70% usage in the future.
Files under /boot will be preserved.
```yaml
- hosts: all
roles:
- name: snapshot_create
snapshot_create_set_name: ripu
snapshot_create_snapshot_autoextend_threshold: 70
snapshot_create_snapshot_autoextend_percent: 20
snapshot_create_boot_backup: true
snapshot_create_volumes:
- vg: rootvg
lv: root
size: 2G
- vg: rootvg
lv: var
size: 2G
```
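A check-only variant of the same playbook (illustrative) only verifies that there is enough free space and creates nothing:
```yaml
- hosts: all
  roles:
    - name: snapshot_create
      snapshot_create_check_only: true
      snapshot_create_set_name: ripu
      snapshot_create_volumes:
        - vg: rootvg
          lv: root
          size: 2G
        - vg: rootvg
          lv: var
          size: 2G
```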

View File

@ -1,2 +0,0 @@
snapshot_create_volumes: []
snapshot_create_boot_backup: false

View File

@ -1,238 +0,0 @@
'''
Check if there is enough space to create all the requested snapshots.
The input should be a json string array.
Each element should have the following keys:
- vg: Name of the volume group
- lv: Name of the Logical Volume
- size: The size of the requested snapshot.
Follows (https://docs.ansible.com/ansible/latest/collections/community/general/lvol_module.html#parameter-size)
without support for the sign prefix
'''
import argparse
import json
import math
import os
import subprocess
import sys
_VGS_COMMAND = '/usr/sbin/vgs'
_LVS_COMMAND = '/usr/sbin/lvs'
_EXIT_CODE_SUCCESS = 0
_EXIT_CODE_VOLUME_GROUP_SPACE = 1
_EXIT_CODE_FILE_SYSTEM_TYPE = 2
_EXIT_CODE_VOLUME_SPACE = 3
_supported_filesystems = [
'',
'ext2',
'ext3',
'ext4'
]
class CheckException(Exception):
""" Exception wrapper """
parser = argparse.ArgumentParser()
parser.add_argument('that', help='What should the script check', type=str, choices=['snapshots', 'resize'])
parser.add_argument('volumes', help='Volumes JSON array in a string', type=str)
def _main():
args = parser.parse_args()
try:
volumes = json.loads(args.volumes)
except json.decoder.JSONDecodeError:
print("Provided volume list '{volumes}' it not a valid json string".format(volumes=sys.argv[1]))
sys.exit(1)
groups_names = set(vol['vg'] for vol in volumes)
groups_info = {
group: _get_group_info(group) for group in groups_names
}
for vol in volumes:
vol['normalized_size'] = _calc_requested_size(groups_info[vol["vg"]], vol)
groups_info[vol["vg"]]['requested_size'] += vol['normalized_size']
if args.that == 'snapshots':
exit_code = _check_free_size_for_snapshots(groups_info)
if exit_code == _EXIT_CODE_SUCCESS:
norm_vols = [
{
'vg': vol['vg'],
'lv': vol['lv'],
'size': "{size}B".format(size=vol['normalized_size'])
} for vol in volumes
]
print(json.dumps(norm_vols))
if args.that == 'resize':
exit_code = _check_free_size_for_resize(volumes, groups_info)
sys.exit(exit_code)
def _check_free_size_for_snapshots(groups_info):
return _check_requested_size(groups_info, 'free')
def _check_free_size_for_resize(volumes, groups_info):
exit_code = _check_requested_size(groups_info, 'size')
if exit_code != _EXIT_CODE_SUCCESS:
return exit_code
mtab = _parse_mtab()
for volume in volumes:
mtab_entry = mtab.get("/dev/mapper/{vg}-{lv}".format(vg=volume['vg'], lv=volume['lv']))
volume['fs_type'] = mtab_entry['type'] if mtab_entry else ''
volume['fs_size'] = _calc_filesystem_size(mtab_entry) if mtab_entry else 0
filesystems_supported = all(volume['fs_type'] in _supported_filesystems for volume in volumes)
if not filesystems_supported:
exit_code = _EXIT_CODE_FILE_SYSTEM_TYPE
enough_space = all(vol['normalized_size'] > vol['fs_size'] for vol in volumes)
if not enough_space:
exit_code = _EXIT_CODE_VOLUME_SPACE
if exit_code != _EXIT_CODE_SUCCESS:
print(json.dumps(_to_printable_volumes(volumes)))
return exit_code
def _check_requested_size(groups_info, group_field):
enough_space = all(group['requested_size'] <= group[group_field] for _, group in groups_info.items())
if not enough_space:
print(json.dumps(groups_info))
return _EXIT_CODE_VOLUME_GROUP_SPACE
return _EXIT_CODE_SUCCESS
def _get_group_info(group):
group_info_str = subprocess.check_output([_VGS_COMMAND, group, '-v', '--units', 'b', '--reportformat', 'json'])
group_info_json = json.loads(group_info_str)
group_info = group_info_json['report'][0]['vg'][0]
return {
'name': group,
'size': _get_size_from_report(group_info['vg_size']),
'free': _get_size_from_report(group_info['vg_free']),
'extent_size': _get_size_from_report(group_info['vg_extent_size']),
'requested_size': 0
}
def _calc_requested_size(group_info, volume):
unit = 'm'
requested_size = volume.get('size', 0)
if requested_size == 0:
# handle thin provisioning
pass
if isinstance(requested_size, int) or isinstance(requested_size, float):
size = requested_size
else:
parts = requested_size.split('%')
if len(parts) == 2:
unit = 'b'
percent = float(parts[0])
percent_of = parts[1]
if percent_of == 'VG':
size = group_info['size'] * percent / 100
elif percent_of == 'FREE':
size = group_info['free'] * percent / 100
elif percent_of == 'ORIGIN':
origin_size = _get_volume_size(volume)
size = origin_size * percent / 100
else:
raise CheckException("Unsupported base type {base_type}".format(base_type=percent_of))
else:
try:
size = float(requested_size[:-1])
unit = requested_size[-1].lower()
except ValueError:
raise CheckException('Failed to read requested size {size}'.format(size=requested_size))
return _align_to_extent(_convert_to_bytes(size, unit), group_info['extent_size'])
def _get_volume_size(vol):
volume_info_str = subprocess.check_output(
[_LVS_COMMAND, "{vg}/{lv}".format(vg=vol['vg'], lv=vol['lv']), '-v', '--units', 'b', '--reportformat', 'json']
)
volume_info_json = json.loads(volume_info_str)
volume_info = volume_info_json['report'][0]['lv'][0]
return _get_size_from_report(volume_info['lv_size'])
def _get_size_from_report(reported_size):
try:
size = float(reported_size)
unit = 'm'
except ValueError:
if reported_size[0] == '<':
reported_size = reported_size[1:]
size = float(reported_size[:-1])
unit = reported_size[-1].lower()
return _convert_to_bytes(size, unit)
def _align_to_extent(size, extent_size):
return math.ceil(size / extent_size) * extent_size
def _calc_filesystem_size(mtab_entry):
fs_stat = os.statvfs(mtab_entry['mount_point'])
return (fs_stat.f_blocks - fs_stat.f_bfree) * fs_stat.f_bsize
def _parse_mtab():
mtab = {}
with open('/etc/mtab') as f:
for m in f:
fs_spec, fs_file, fs_vfstype, _fs_mntops, _fs_freq, _fs_passno = m.split()
mtab[fs_spec] = {
'mount_point': fs_file,
'type': fs_vfstype
}
return mtab
def _convert_to_bytes(size, unit):
convertion_table = {
'b': 1024 ** 0,
'k': 1024 ** 1,
'm': 1024 ** 2,
'g': 1024 ** 3,
't': 1024 ** 4,
'p': 1024 ** 5,
'e': 1024 ** 6,
}
return size * convertion_table[unit]
def _convert_to_unit_size(bytes):
units = ['b', 'k', 'm', 'g', 't', 'p', 'e']
i = 0
while bytes >= 1024:
i += 1
bytes /= 1024
# Round down bytes to two digits
bytes = math.floor(bytes * 100) / 100
return "{size}{unit}".format(size=bytes, unit=units[i])
def _to_printable_volumes(volumes):
return {
volume['vg'] + "_" + volume['lv']: {
'file_system_type': volume['fs_type'],
'used': _convert_to_unit_size(volume['fs_size']),
'requested_size': _convert_to_unit_size(volume['normalized_size'])
} for volume in volumes
}
if __name__ == '__main__':
_main()

View File

@ -1,14 +0,0 @@
---
galaxy_info:
author: Ygal Blum, Bob Mader
description: Create a defined set of LVM snapshot volumes
company: Red Hat
license: MIT
min_ansible_version: "2.14"
platforms:
- name: EL
versions:
- all
galaxy_tags: []
dependencies: []
...

View File

@ -1,26 +0,0 @@
- name: Verify that all volumes exist
ansible.builtin.include_tasks: verify_volume_exists.yml
loop: "{{ snapshot_create_volumes }}"
- name: Verify that there are no existing snapshots
ansible.builtin.include_tasks: verify_no_existing_snapshot.yml
loop: "{{ snapshot_create_volumes }}"
- name: Verify that there is enough storage space
ansible.builtin.script: check.py snapshots '{{ snapshot_create_volumes | to_json }}'
args:
executable: "{{ ansible_python.executable }}"
register: snapshot_create_check_status
failed_when: false
changed_when: false
- name: Store check return in case of failure
ansible.builtin.set_fact:
snapshot_create_check_failure_json: "{{ snapshot_create_check_status.stdout | from_json }}"
when: snapshot_create_check_status.rc != 0
- name: Assert results
ansible.builtin.assert:
that: snapshot_create_check_status.rc == 0
fail_msg: Not enough space in the Volume Groups to create the requested snapshots
success_msg: The Volume Groups have enough space to create the requested snapshots

View File

@ -1,68 +0,0 @@
- name: Update lvm configuration
block:
- name: Stringify snapshot_autoextend_percent setting
ansible.builtin.set_fact:
snapshot_create_snapshot_autoextend_percent_config: "activation/snapshot_autoextend_percent={{ snapshot_create_snapshot_autoextend_percent }}"
when: snapshot_create_snapshot_autoextend_percent is defined
- name: Stringify snapshot_autoextend_threshold setting
ansible.builtin.set_fact:
snapshot_create_snapshot_autoextend_threshold_config: "activation/snapshot_autoextend_threshold={{ snapshot_create_snapshot_autoextend_threshold }}"
when: snapshot_create_snapshot_autoextend_threshold is defined
- name: Stringify the new config
ansible.builtin.set_fact:
snapshot_create_new_lvm_config: >
{{ snapshot_create_snapshot_autoextend_percent_config | default('') }}
{{ snapshot_create_snapshot_autoextend_threshold_config | default('') }}
- name: Set LVM configuration
ansible.builtin.command: 'lvmconfig --mergedconfig --config "{{ snapshot_create_new_lvm_config }}" --file /etc/lvm/lvm.conf'
changed_when: true
when: ((snapshot_create_new_lvm_config | trim) | length) > 0
- name: Check for grubenv saved_entry
ansible.builtin.lineinfile:
name: /boot/grub2/grubenv
regexp: ^saved_entry=
state: absent
check_mode: true
changed_when: false
failed_when: false
register: snapshot_create_grubenv
- name: Add grubenv saved_entry
ansible.builtin.shell: 'grubby --set-default-index=$(grubby --default-index)'
changed_when: true
when: snapshot_create_grubenv.found is defined and snapshot_create_grubenv.found == 0
- name: Create snapshots
community.general.lvol:
vg: "{{ item.vg }}"
lv: "{{ item.lv }}"
snapshot: "{{ item.lv }}_{{ snapshot_create_set_name }}"
size: "{{ item.size | default(omit) }}"
loop: "{{ snapshot_create_volumes }}"
- name: Required packages are present
ansible.builtin.package:
name:
- gzip
- tar
state: present
- name: Create boot backup
community.general.archive:
format: gz
mode: '0644'
dest: "/root/boot-backup-{{ snapshot_create_set_name }}.tgz"
path:
- "/boot/initramfs-{{ ansible_kernel }}.img"
- "/boot/vmlinuz-{{ ansible_kernel }}"
- "/boot/System.map-{{ ansible_kernel }}"
- "/boot/symvers-{{ ansible_kernel }}.gz"
- "/boot/config-{{ ansible_kernel }}"
- "/boot/.vmlinuz-{{ ansible_kernel }}.hmac"
- /boot/grub/grub.conf
- /boot/grub2/grub.cfg
- /boot/grub2/grubenv
- /boot/loader/entries
- /boot/efi/EFI/redhat/grub.cfg
when: snapshot_create_boot_backup

View File

@ -1,8 +0,0 @@
- name: Check available disk space
ansible.builtin.include_tasks: check.yml
- name: Create Snapshot
vars:
snapshot_create_volumes: "{{ snapshot_create_check_status.stdout | from_json }}"
ansible.builtin.include_tasks: create.yml
when: not (snapshot_create_check_only | default(false))

View File

@ -1,21 +0,0 @@
- name: Run lvs
ansible.builtin.command: >
lvs
--select 'vg_name = {{ item.vg }}
&& origin = {{ item.lv }}
&& lv_name = {{ item.lv }}_{{ snapshot_create_set_name }}'
--reportformat json
register: snapshot_create_lvs_response
changed_when: false
- name: Parse report
ansible.builtin.set_fact:
snapshot_create_lv_snapshot_report_array: "{{ (snapshot_create_lvs_response.stdout | from_json).report[0].lv }}"
- name: Verify that no snapshot exists for the volume
ansible.builtin.assert:
that: (snapshot_create_lv_snapshot_report_array | length) == 0
fail_msg: >
The volume '{{ item.lv }}' in volume group '{{ item.vg }}'
already has at least one snapshot
'{{ snapshot_create_lv_snapshot_report_array[0].lv_name | default('none') }}'

View File

@ -1,9 +0,0 @@
- name: Run lvs
ansible.builtin.command: "lvs --select 'vg_name = {{ item.vg }} && lv_name = {{ item.lv }}' --reportformat json"
register: snapshot_create_lvs_response
changed_when: false
- name: Verify that the volume was found
ansible.builtin.assert:
that: (((snapshot_create_lvs_response.stdout | from_json).report[0].lv) | length) > 0
fail_msg: "Could not find volume '{{ item.lv }}' in volume group '{{ item.vg }}'"

View File

@ -1,32 +0,0 @@
# snapshot_remove role
The `snapshot_remove` role is used to remove snapshots.
In addition, it removes the backup of the Grub configuration and image files under /boot, if one was previously created.
It is intended to be used along with the `snapshot_create` role.
The role is designed to support the automation of RHEL in-place upgrades, but can also be used to reduce the risk of more mundane system maintenance activities.
## Role Variables
### `snapshot_remove_set_name`
The variable `snapshot_remove_set_name` is used to identify the list of volumes to be operated upon.
The role will use the following naming convention when reverting the snapshots:
`<Origin LV name>_<snapshot_remove_set_name>`
This naming convention will be used to identify the snapshots to be removed.
## Example Playbooks
### Commit
A commit playbook is used when users are comfortable the snapshots are not needed any longer.
Each snapshot in the snapshot set is removed and the backed up image files from /boot are deleted.
```yaml
- hosts: all
roles:
- name: snapshot_remove
snapshot_remove_set_name: ripu
```

View File

@ -1,14 +0,0 @@
---
galaxy_info:
author: Ygal Blum, Bob Mader
description: Remove snapshots previously created using the snapshot_create role
company: Red Hat
license: MIT
min_ansible_version: "2.14"
platforms:
- name: EL
versions:
- all
galaxy_tags: []
dependencies: []
...

View File

@ -1,23 +0,0 @@
- name: Calculate the list of snapshots
block:
- name: Get list of volumes
ansible.builtin.command: "lvs --select 'lv_name =~ {{ snapshot_remove_set_name }}$ && origin != \"\"' --reportformat json "
register: snapshot_remove_lvs_response
changed_when: false
- name: Get LV dict List
ansible.builtin.set_fact:
snapshot_remove_snapshots: "{{ (snapshot_remove_lvs_response.stdout | from_json).report[0].lv }}"
- name: Remove snapshots
community.general.lvol:
state: absent
vg: "{{ item.vg_name }}"
lv: "{{ item.origin }}"
snapshot: "{{ item.lv_name }}"
force: true
loop: "{{ snapshot_remove_snapshots }}"
- name: Remove boot backup
ansible.builtin.file:
path: "/root/boot-backup-{{ snapshot_remove_set_name }}.tgz"
state: absent

View File

@ -1,36 +0,0 @@
# snapshot_revert role
The `snapshot_revert` role is used to merge snapshots to origin and reboot (i.e., rollback).
The role will verify that all snapshots in the set are still in an active state before doing any merges.
This is to prevent rolling back if any snapshots have become invalidated, in which case the role should fail.
In addition, it restores the Grub configuration and image files under /boot if they were previously backed up.
It is intended to be used along with the `snapshot_create` role.
The role is designed to support the automation of RHEL in-place upgrades, but can also be used to reduce the risk of more mundane system maintenance activities.
## Role Variables
### `snapshot_revert_set_name`
The variable `snapshot_revert_set_name` is used to identify the list of volumes to be operated upon.
The role will use the following naming convention when reverting the snapshots:
`<Origin LV name>_<snapshot_revert_set_name>`
This naming convention will be used to identify the snapshots to be merged.
The `revert` action will verify that all snapshots in the set are still in an active state before doing any merges. This is to prevent rolling back if any snapshots have become invalidated, in which case the `revert` action should fail.
## Example Playbooks
This playbook rolls back the host using the snapshots created using the `snapshot_create` role.
After verifying that all snapshots are still valid, each logical volume in the snapshot set is merged.
The image files under /boot will be restored and then the host will be rebooted.
```yaml
- hosts: all
roles:
- name: snapshot_revert
snapshot_revert_set_name: ripu
```

View File

@ -1,14 +0,0 @@
---
galaxy_info:
author: Ygal Blum, Bob Mader
description: Revert to snapshots previously created using the snapshot_create role
company: Red Hat
license: MIT
min_ansible_version: "2.14"
platforms:
- name: EL
versions:
- all
galaxy_tags: []
dependencies: []
...

View File

@ -1,73 +0,0 @@
- name: Calculate the list of snapshots
block:
- name: Get list of volumes
ansible.builtin.command: "lvs --select 'lv_name =~ {{ snapshot_revert_set_name }}$ && origin != \"\"' --reportformat json "
register: snapshot_revert_lvs_response
changed_when: false
- name: Get LV dict List
ansible.builtin.set_fact:
snapshot_revert_snapshots: "{{ (snapshot_revert_lvs_response.stdout | from_json).report[0].lv }}"
- name: Verify that all snapshots are active
ansible.builtin.include_tasks: verify_snapshot_active.yml
loop: "{{ snapshot_revert_snapshots }}"
- name: Required packages are present
ansible.builtin.package:
name:
- gzip
- tar
state: present
- name: Check if Boot backup exists
ansible.builtin.stat:
path: "/root/boot-backup-{{ snapshot_revert_set_name }}.tgz"
register: snapshot_revert_boot_archive_stat
- name: Restore boot backup
ansible.builtin.unarchive:
remote_src: true
src: "{{ snapshot_revert_boot_archive_stat.stat.path }}"
dest: /boot
when: snapshot_revert_boot_archive_stat.stat.exists
- name: Revert to snapshots
ansible.builtin.command: "lvconvert --merge /dev/{{ item.vg_name }}/{{ item.lv_name }}"
loop: "{{ snapshot_revert_snapshots }}"
changed_when: false
- name: Reboot
ansible.builtin.reboot:
- name: Check if /boot is on LVM
ansible.builtin.command: "grub2-probe --target=abstraction /boot"
changed_when: false
failed_when: false
register: snapshot_revert_boot_abstraction
- name: Reinstall Grub to boot device
when: snapshot_revert_boot_abstraction.stdout == 'lvm'
block:
- name: Get boot device
ansible.builtin.shell: "lsblk -spnlo name $(grub2-probe --target=device /boot)"
changed_when: false
register: snapshot_revert_boot_dev_deps
- name: Run grub2-install
ansible.builtin.command: "grub2-install {{ snapshot_revert_boot_dev_deps.stdout_lines | last }}"
changed_when: true
- name: Remove boot backup
ansible.builtin.file:
path: "{{ snapshot_revert_boot_archive_stat.stat.path }}"
state: absent
when: snapshot_revert_boot_archive_stat.stat.exists
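# While a merge is in progress, lvs reports a data_percent value for the origin
# volume; the field is empty once the snapshot has fully drained into it.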
- name: Wait for the snapshot to drain
ansible.builtin.command: "lvs --select 'vg_name = {{ item.vg_name }} && lv_name = {{ item.origin }}' --reportformat json"
register: snapshot_revert_lv_drain_check
until: (snapshot_revert_lv_drain_check.stdout | from_json).report[0].lv[0].data_percent == ""
retries: 20
delay: 30
loop: "{{ snapshot_revert_snapshots }}"
changed_when: false

View File

@ -1,14 +0,0 @@
- name: Run lvs
ansible.builtin.command: "lvs --select 'lv_name = {{ item.lv_name }}' --reportformat json"
register: snapshot_revert_lvs_response
changed_when: false
- name: Parse report
ansible.builtin.set_fact:
snapshot_revert_lv_attr: "{{ (snapshot_revert_lvs_response.stdout | from_json).report[0].lv[0].lv_attr }}"
- name: Verify that the snapshot is active
ansible.builtin.assert:
that:
- snapshot_revert_lv_attr[0] == 's'
- snapshot_revert_lv_attr[4] == 'a'