
Why does the lsblk command display full details on some servers but not on others?

Examples: Server 1

 ~]$ lsblk
NAME             MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                8:0    0 745.2G  0 disk 
├─sda1             8:1    0   600M  0 part /boot/efi
├─sda2             8:2    0     1G  0 part /boot
└─sda3             8:3    0   743G  0 part 
  ├─rhel-root    253:0    0   200G  0 lvm  /
  ├─rhel-swap    253:1    0    16G  0 lvm  [SWAP]
  └─rhel-usr_opt 253:2    0   527G  0 lvm  /usr/opt

Server 2, which has the same infrastructure, shows all the details:

 ~]$ lsblk
NAME                           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                              8:0    0 745.2G  0 disk  
├─sda1                           8:1    0   600M  0 part  /boot/efi
├─sda2                           8:2    0     1G  0 part  /boot
└─sda3                           8:3    0   743G  0 part  
  ├─rhel_mbs-root              253:0    0   200G  0 lvm   /
  ├─rhel_mbs-swap              253:1    0    32G  0 lvm   [SWAP]
  └─rhel_mbs-usr_opt           253:2    0   511G  0 lvm   /usr/opt
sdb                              8:16   0 833.5G  0 disk  
└─mpatha                       253:3    0 833.5G  0 mpath 
  └─mpatha1                    253:4    0 833.5G  0 part  
sdc                              8:32   0 833.5G  0 disk  
└─mpatha                       253:3    0 833.5G  0 mpath 
  └─mpatha1                    253:4    0 833.5G  0 part  
sdd                              8:48   0 833.5G  0 disk  
└─mpatha                       253:3    0 833.5G  0 mpath 
  └─mpatha1                    253:4    0 833.5G  0 part  
sde                              8:64   0 833.5G  0 disk  
└─mpatha                       253:3    0 833.5G  0 mpath 
  └─mpatha1                    253:4    0 833.5G  0 part  
sdf                              8:80   0 833.5G  0 disk  
└─mpatha                       253:3    0 833.5G  0 mpath 
  └─mpatha1                    253:4    0 833.5G  0 part  
sdg                              8:96   0 833.5G  0 disk  
└─mpatha                       253:3    0 833.5G  0 mpath 
  └─mpatha1                    253:4    0 833.5G  0 part  
sdh                              8:112  0 833.5G  0 disk  
└─mpatha                       253:3    0 833.5G  0 mpath 
  └─mpatha1                    253:4    0 833.5G  0 part  
sdi                              8:128  0 833.5G  0 disk  
└─mpatha                       253:3    0 833.5G  0 mpath 
  └─mpatha1                    253:4    0 833.5G  0 part  
Maan

3 Answers


There is nothing wrong with lsblk. The two servers do not seem to have the same infrastructure.

Server 2 has 8 additional disks (/dev/sdb through /dev/sdi) which are missing from Server 1. Either these disks are not installed, or they are not yet recognized by the kernel. You need to run partprobe or reboot the server.
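To pin down exactly which devices differ between the two hosts, you can diff their disk lists. A minimal sketch, assuming you have saved the output of lsblk -rno NAME,TYPE from each server into the hypothetical files server1.txt and server2.txt (the captures below are stand-ins):

```shell
#!/bin/bash
# Extract only whole-disk names (TYPE == "disk") from a capture of
# `lsblk -rno NAME,TYPE`.
disks_only() {
    awk '$2 == "disk" { print $1 }' "$1"
}

# Stand-in captures for the two servers (hypothetical contents):
cat > server1.txt <<'EOF'
sda disk
sda1 part
EOF
cat > server2.txt <<'EOF'
sda disk
sdb disk
sdc disk
EOF

# Disks present on Server 2 but missing from Server 1:
comm -13 <(disks_only server1.txt | sort) <(disks_only server2.txt | sort)
```

The same comm invocation works on live systems too, e.g. via process substitution over ssh instead of the capture files.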

dr_

Multipathing with 8 paths for the same LUN suggests some enterprise-grade SAN connection: perhaps a FibreChannel switched fabric?

If the LUN has been presented to Server 1 while the OS was already running, you might need to do something like this to tell the FC host adapters to accept the newly presented LUN paths:

for i in /sys/class/fc_host/host*; do echo "- - -" > /sys/class/scsi_host/${i##*/}/scan; done

In some cases, you might need to even tell the FC host adapters to perform a full reset of the FibreChannel link, particularly if the SAN administrator has made some major configuration changes to the storage system:

for i in /sys/class/fc_host/host*; do echo "1" > $i/issue_lip; done

If the missing sd* devices won't appear after these commands, you might want to double-check the WWNs of each FC port (for i in /sys/class/fc_host/host*; do printf "${i##*/}: "; cat $i/port_name; done) and verify that each FC cable is plugged in the correct port.

It is common to arrange the FibreChannel fabric into two separate halves for fault tolerance, and a FC-connected host will have (at least) two FC adapters, one for each half. If you get the cables crossed (i.e. an adapter that the storage expects to see in fabric A is connected to fabric B and vice versa), you will see no LUNs at all.

If the sd* devices will appear after rescanning/resetting the adapters (as described above), but the mpath* devices won't, you might need to install device-mapper-multipath and/or run mpathconf --enable.
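On RHEL-family systems, mpathconf --enable generates a baseline /etc/multipath.conf for you. For orientation, a minimal configuration of the kind it typically produces might look like the sketch below (your distribution may generate something different):

```
defaults {
    # Name devices mpatha, mpathb, ... instead of by WWID
    user_friendly_names yes
    # Only assemble devices that actually have multiple paths
    find_multipaths yes
}
```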

telcoM

The differences in lsblk output between the two servers come from differences in their configuration or hardware setup.

There seems to be multipathing on the second one.

Multipathing is a technique used in storage area networks (SANs) to provide redundancy and improve performance: it creates more than one physical path between the server and its storage devices, which gives better fault tolerance and can enhance throughput.

The disks from sdb to sdi each appear as one path to the same multipath device mpatha, which carries a single partition mpatha1 (not mounted anywhere in this output).

Either these devices are assembled and mounted at startup, or you have to do it manually on Server 1; compare the configuration of the two servers to find where Server 1 differs or where the error is.
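The number of paths behind each mpath device can be read straight off the lsblk output: in raw mode (-r) the mpath child line repeats once under every disk that provides a path, so counting repetitions gives the path count. A sketch using a hypothetical capture file:

```shell
#!/bin/bash
# Count physical paths per multipath device from `lsblk -rno NAME,TYPE`
# output. In -r mode the mpath entry repeats under each member disk.
cat > lsblk-capture.txt <<'EOF'
sda disk
mpatha mpath
sdb disk
mpatha mpath
sdc disk
mpatha mpath
EOF

awk '$2 == "mpath" { paths[$1]++ } END { for (d in paths) print d, paths[d] }' lsblk-capture.txt
```

On a live system, multipath -ll is the authoritative way to see the same path grouping.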

Useful options for lsblk and blkid:

Display information about file systems

lsblk --fs

Display name, mountpoint and UUID

lsblk -o NAME,MOUNTPOINT,UUID

List disks by persistent identifier:

ls -l /dev/disk/by-id/

ls -l /dev/disk/by-label/

ls -l /dev/disk/by-partuuid/

ls -l /dev/disk/by-path/

ls -l /dev/disk/by-uuid/

Get your block device attributes with

blkid -o list
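The /dev/disk/by-* entries are just symlinks back to the kernel device nodes, so resolving them shows which persistent name maps to which sdX. A sketch that simulates the layout in a temporary directory (the WWN below is made up):

```shell
#!/bin/bash
# Resolve persistent-name symlinks to kernel names, as you would under
# /dev/disk/by-id/. Simulated with a temp dir and a made-up WWN.
dir=$(mktemp -d)
ln -s ../../sda "$dir/wwn-0x5000c500aabbccdd"

for link in "$dir"/*; do
    printf '%s -> %s\n' "$(basename "$link")" "$(basename "$(readlink "$link")")"
done
```

Run against the real /dev/disk/by-id/ directory, the same loop maps each WWN or model string to its current kernel name.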

Further reading:

Managing Multipath I/O for Devices

Device mapper multipathing - introduction

What is Multipathing? (Oracle)

Z0OM