Grow a GFS2 Partition
Alteeve Wiki :: How To :: Grow a GFS2 Partition
This is a brief tutorial on growing a GFS2 partition that exists on an LVM LV backed by a DRBD resource in a Two Node Fedora 13 Cluster.
Growing a GFS2 Partition
To grow a GFS2 partition, you must know where it is mounted. As odd as it may seem at first, you cannot grow an unmounted GFS2 partition. Also, you only need to run the grow commands from one node; once completed, all nodes will see and use the new free space automatically.
This requires two steps to complete:
- Extend the underlying LVM logical volume
- Grow the actual GFS2 partition
Extend the LVM LV
To keep things simple, we'll just use some of the free space we left on our /dev/drbd0 LVM physical volume. If you need to add more storage to your LVM first, please follow the instructions in the article: "Adding Space to an LVM" before proceeding.
Let's add 50 GB to our GFS2 logical volume /dev/drbd_vg0/xen_store from the /dev/drbd0 physical volume, which we know has the space available because we left more than that free when we first set up our LVM. To actually add the space, we use the lvextend command:
lvextend -L +50G /dev/drbd_vg0/xen_store /dev/drbd0
Which should return:
Extending logical volume xen_store to 70.00 GB
Logical volume xen_store successfully resized
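As a quick sanity check, the numbers in the lvdisplay output below line up with this resize. Assuming LVM's default 4 MiB physical extent size (an assumption; this tutorial never changes it), the 17920 logical extents reported correspond exactly to the new 70 GB size:

```python
# Sanity-check the resize arithmetic: with LVM's default 4 MiB extent
# size (assumed here), lvdisplay's "Current LE" count should match the
# new LV size of 70 GB (20 GB original + the 50 GB we just added).
EXTENT_MIB = 4            # default LVM physical extent size (assumption)

old_size_gib = 20         # xen_store before the resize
added_gib = 50            # what we passed to lvextend -L +50G
new_size_gib = old_size_gib + added_gib

extents = new_size_gib * 1024 // EXTENT_MIB
print(new_size_gib, extents)  # 70 GiB -> 17920 extents, matching lvdisplay
```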
If we run lvdisplay /dev/drbd_vg0/xen_store now, we should see the extra space.
--- Logical volume ---
LV Name /dev/drbd_vg0/xen_store
VG Name drbd_vg0
LV UUID svJx35-KDXK-ojD2-UDAA-Ah9t-UgUl-ijekhf
LV Write Access read/write
LV Status available
# open 1
LV Size 70.00 GB
Current LE 17920
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3
You're now ready to proceed.
Grow The GFS2 Partition
This step is pretty simple, but you need to enter the commands exactly. Also, you'll want to do a dry-run first and address any resulting errors before issuing the final gfs2_grow command.
To get the exact name to use when calling gfs2_grow, run the following command:
gfs2_tool df
/xen_store:
SB lock proto = "lock_dlm"
SB lock table = "an-cluster:xen_store"
SB ondisk format = 1801
SB multihost format = 1900
Block size = 4096
Journals = 2
Resource Groups = 80
Mounted lock proto = "lock_dlm"
Mounted lock table = "an-cluster:xen_store"
Mounted host data = "jid=1:id=196610:first=0"
Journal number = 1
Lock module flags = 0
Local flocks = FALSE
Local caching = FALSE
Type Total Blocks Used Blocks Free Blocks use%
------------------------------------------------------------------------
data 5242304 1773818 3468486 34%
inodes 3468580 94 3468486 0%
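The figures in this table are easy to cross-check. Blocks are 4096 bytes (the "Block size" line above), so the data pool works out to roughly 20 GiB, about a third of it in use, which matches the original 20 GB LV before our resize:

```python
# Cross-check the gfs2_tool df numbers: blocks are 4096 bytes, so the
# data pool is ~20 GiB with ~34% in use, consistent with the table above.
block_size = 4096
total, used, free = 5242304, 1773818, 3468486   # data row, in blocks

assert total == used + free              # the table is internally consistent
total_gib = total * block_size / 1024**3
use_pct = round(100 * used / total)
print(f"{total_gib:.1f} GiB, {use_pct}% used")  # ~20.0 GiB, 34% used
```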
From this output, we know that GFS2 expects the name "/xen_store". Even something as small as a trailing slash will cause the command to fail. The program we will use is gfs2_grow; its -T switch performs a dry run, letting you work out any errors before changing the filesystem.
For example, if you added the trailing slash, this is the kind of error you would see:
Bad command:
gfs2_grow -T /xen_store/
GFS Filesystem /xen_store/ not found
Once we get it right, it will look like this:
gfs2_grow -T /xen_store
(Test mode--File system will not be changed)
FS: Mount Point: /xen_store
FS: Device: /dev/mapper/drbd_vg0-xen_store
FS: Size: 5242878 (0x4ffffe)
FS: RG size: 65535 (0xffff)
DEV: Size: 18350080 (0x1180000)
The file system grew by 51200MB.
gfs2_grow complete.
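The dry run's "grew by 51200MB" figure can be verified from the block counts it prints. Both FS Size and DEV Size are in 4 KiB filesystem blocks, so the difference is exactly the 50 GB (51200 MiB) we added with lvextend:

```python
# Verify the dry-run's "grew by 51200MB" figure from the block counts it
# prints: both sizes are counted in 4 KiB filesystem blocks.
block_kib = 4
fs_blocks = 5242878     # FS:  Size (current filesystem)
dev_blocks = 18350080   # DEV: Size (underlying, now-larger LV)

grow_mib = (dev_blocks - fs_blocks) * block_kib // 1024
print(grow_mib)  # 51200, i.e. the 50 GB we added with lvextend
```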
This looks good! We're now ready to re-run the command without the -T switch:
gfs2_grow /xen_store
FS: Mount Point: /xen_store
FS: Device: /dev/mapper/drbd_vg0-xen_store
FS: Size: 5242878 (0x4ffffe)
FS: RG size: 65535 (0xffff)
DEV: Size: 18350080 (0x1180000)
The file system grew by 51200MB.
gfs2_grow complete.
You can now confirm that the new space is visible on both nodes with a simple call like df -h.
Any questions, feedback, advice, complaints or meanderings are welcome.