Ceph Kernel Module Missing. #2916
@ajh34 We do not enable CEPH in our kernel builds. It seems like a fairly niche system, and unless its impact on memory and CPU bandwidth is minimal (close to zero), we would be unlikely to incorporate it by default. If you can provide those figures, it would give us something to consider. It seems likely that as a module its impact would indeed be minimal, but we would need to be sure. |
Unexpectedly ran into this same issue. I would never have expected a modern kernel not to include Ceph. The performance impact of this as a module should be extremely small when it's not loaded, and loading of the module will only occur when a user actually uses it to mount a filesystem. There is a Ceph-FUSE option, but that performs much worse than the kernel module. |
@pelwell @popcornmix Any thoughts? |
As a filesystem I would expect no impact on performance if it isn't used, but there could be some overhead in the static (non-module) kernel size. The usual expectation when somebody asks for a feature to be added is that they do a trial build and show the size of any additional modules, the size of the kernel .img, and the free memory when it is used, along with the same information for a kernel that differs only in that the feature is not enabled. |
I built from fe915de, following the instructions here: https://www.raspberrypi.org/documentation/linux/kernel/building.md The resultant image: The kernel previously: (Different hostnames, but the nodes were updated at the same time.) The resultant kernel images: Of course, this is not an apples-to-apples comparison: If I get a chance I'll run a build from the same point that the released kernel was built from. Changes to the kernel configuration:
|
Using the modified config and building from the tag raspberrypi-kernel_1.20200601-1: -rwxr-xr-x 1 root root 5776216 Jul 9 21:50 kernel-ceph.img Linux cluster0 4.19.118-v7l-ceph+ #2 SMP Thu Jul 9 21:45:01 BST 2020 armv7l GNU/Linux And this is the official build again for comparison: -rwxr-xr-x 1 root root 5801056 Jul 6 17:13 kernel7l.img For clarification, the configuration was modified by explicitly setting:
and then accepting the defaults for the other options the build prompts for. |
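The exact settings are elided above, but judging from the changelog entry later in the thread (which enables CEPH_FS=m), the mainline Kconfig symbols involved were most likely the following. This is an assumption, not the poster's confirmed diff:

```
CONFIG_CEPH_LIB=m
CONFIG_CEPH_FS=m
```

CONFIG_CEPH_LIB is the shared Ceph messenger library (net/ceph); it is selected automatically when CEPH_FS is enabled, so setting CEPH_FS=m alone is normally sufficient.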
Sorry, I forgot to include the sizes of the modules: -rw-r--r-- 1 root root 299404 Jul 9 21:50 /lib/modules/4.19.118-v7l-ceph+/kernel/fs/ceph/ceph.ko |
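As a sanity check, the size figures quoted in the comments above can be compared with a little shell arithmetic (the byte counts are copied verbatim from this thread):

```shell
# Byte sizes reported earlier in this thread.
ceph_img=5776216    # kernel-ceph.img (CEPH enabled, tag raspberrypi-kernel_1.20200601-1)
stock_img=5801056   # official kernel7l.img for comparison
ceph_module=299404  # fs/ceph/ceph.ko, uncompressed

# Difference between the two kernel images (negative: the CEPH build is smaller).
echo "image delta: $(( ceph_img - stock_img )) bytes"
# Cost of the module itself on disk.
echo "module cost: $(( ceph_module / 1024 )) KiB"
```

The CEPH-enabled image actually comes out slightly smaller here, which matches the poster's caveat that the two builds also differed in their prompted defaults; the real cost is the roughly 300kB module under /lib/modules.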
Please explain why we should tax all Pi users 4 * 600kB = 2.4MB for this feature. |
I'm not saying you should :-) I happened to be building the Ceph modules to try them out, and noticed this issue with the last comment being a request for the information. I do notice, however, that the change in disk space for the kernel modules under /lib/modules is an increase of 0.14% between the supplied v7l directory and my build. |
Interestingly, I notice that there's just shy of 8MiB of kernel modules for OCFS2 (https://en.wikipedia.org/wiki/OCFS2) support in the current Raspberry Pi OS image. Would that be because of this: https://blogs.oracle.com/developers/building-the-world%e2%80%99s-largest-raspberry-pi-cluster |
Add to that another 2MiB for GFS2 (https://en.wikipedia.org/wiki/GFS2) support and 2MiB plus change for DRBD (https://en.wikipedia.org/wiki/Distributed_Replicated_Block_Device) support. At this point it is looking a bit strange that the Ceph modules are not included. |
I also ran into this issue trying to set up a local Ceph cluster in k3s. As I don't know too much about kernel modules: is there a way to compile the module but not load it by default until a user opts in? |
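Building a feature as =m is exactly that opt-in mechanism: the module stays on disk unloaded, and the kernel auto-loads ceph.ko on the first mount of a ceph filesystem, so nothing is resident before then. If you instead want the module loaded explicitly at every boot, a systemd modules-load fragment would do it (the file name here is hypothetical; the mechanism is systemd-modules-load):

```
# /etc/modules-load.d/ceph.conf (hypothetical file name)
# Each listed module is loaded at boot by systemd-modules-load.
ceph
```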
Yes, I think support for Ceph is extremely necessary. Considering current trends, many people want to try production-ready K8s with Ceph distributed storage at home. Many companies use this technology stack; I run experiments with the latest technologies on my Raspberry Pi cluster in order to then apply that experience to real infrastructure at work. @pelwell, rebuilding the kernel every time is tedious, especially since not everyone is ready to dive into it. IMHO, 2.4MiB is not such a price that it somehow affects ordinary users... |
Just to let you know: missing out on this led me to switch to pure Debian images as my base image. Setting up Ceph in that setup was a breeze. |
Of course one can go back to using the upstream kernel (or a distro that uses it). But my use case is one where I build a K8s cluster on PoE-powered RPi 4s. The PoE HAT requires the downstream kernel, or the fans do not spin up (and then the systems overheat); however, running Rook/Ceph on top of K8s requires RBD... |
Any news on this? The missing Ceph support prevents the use of Rook on K3s. |
What's the holdup on this? RBD and CephFS are used widely with K3s and other K8s solutions. At the moment I need to either install a non-Raspbian distro or rebuild the kernel to get Ceph support for K8s. This is silly, ESPECIALLY when you have other, much larger modules enabled for "show boat" usages. Also, on what planet is 2MB an issue when you recommend a minimum of 8GB for the SD card? Moreover, you can always delete modules after install if the 2MB is so critical. Or, and here's a wild idea, package the kernel modules for Ceph, OCFS2, GFS2, and DRBD separately. So build them, but package them separately. Because @pelwell, I want my 10+MB back from OCFS2, GFS2, and DRBD. I don't know why we tax every RPi install 12MB for stuff most people aren't going to use. I mean, who in their right mind would use RPis for DRBD? |
If I remember correctly, there was the option to install additional kernel modules with Also, with the turing-pi having achieved >7000 backers, I assume there will be significant demand for this feature coming later this year. |
Add support for the CEPH distributed filesystem. See: #2916 Signed-off-by: Phil Elwell <[email protected]>
The bad news is that since the last size measurement (on 4.19), the module size has increased:
This takes the overhead to ~4*750kB = ~3MB. However, the good news is that a few months ago we enabled module compression, so when installed the sizes drop considerably:
I think this is acceptable for a (currently) niche feature, therefore the option has been added to all current kernel branches and will be in future releases. |
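For reference, the module compression mentioned here is the kernel's built-in support for compressing .ko files at install time. In Kconfig terms it looks something like the following; the choice of xz is an assumption, and the exact symbol names vary between kernel versions (newer kernels drop the umbrella CONFIG_MODULE_COMPRESS option in favour of a per-compressor choice):

```
CONFIG_MODULE_COMPRESS=y
CONFIG_MODULE_COMPRESS_XZ=y
```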
kernel: dtoverlays: Add nohdmi options to vc4-kms-v3d overlays raspberrypi/linux#5099
kernel: overlays: Make more overlays runtime-capable See: raspberrypi/linux#5101
kernel: overlays: Mark more overlays as Pi4-specific
kernel: Revert ext4: make mb_optimize_scan performance mount option work with extents See: raspberrypi/linux#5097
kernel: configs: Enable IIO software trigger modules See: raspberrypi/linux#4984
kernel: configs: Enable IP_VS_IPV6 (for loadbalancing) See: raspberrypi/linux#2860
kernel: configs: Enable CEPH_FS=m See: raspberrypi/linux#2916
firmware: arm_loader: initramfs over NVME fix See: #1731
Can rbd be added as well? I think almost anywhere that ceph is wanted, rbd will be wanted as well, including the rook/ceph stuff in kubernetes. CONFIG_BLK_DEV_RBD=m
|
Will this change include |
Don't add further requests onto existing issues - they will be ignored. |
modprobe: FATAL: Module ceph not found in directory /lib/modules/4.14.98-v7+
Looks like the Ceph package does not install the required kernel module to mount cephfs filesystems. Same issue on stretch and previous release.