The nvmeof service continues to fail. I have tweaked multiple properties in the service file, but without success; even with the standard commands the service doesn't start. Could someone have a look at the logs and tell me in which direction I need to look for the issue?
Service file:

```yaml
---
service_type: nvmeof
service_name: nvmeof.gateway-group
placement:
  count: 2
  hosts:
    - ctrl01.mydomain.com
    - ctrl02.mydomain.com
    # - ctrl03.mydomain.com
networks:
  - 10.0.0.0/24
  - 10.1.0.0/22
  - 10.2.0.0/22
abort_on_errors: false
spec:
  pool: nvme-pool
  group: gateway-group
  addr_map:
    ctrl01.mydomain.com: 10.0.0.85
    ctrl02.mydomain.com: 10.0.0.131
    ctrl03.mydomain.com: 10.0.0.47
  discovery_addr_map:
    ctrl01.mydomain.com: 10.1.1.101
    ctrl02.mydomain.com: 10.1.1.102
    ctrl03.mydomain.com: 10.1.1.103
  log_level: DEBUG
  abort_on_errors: false  # <-- (doesn't seem to work)
  spdk_protocol_log_level: DEBUG
  # enable_monitor_client: false
```
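As a side note, one thing worth double-checking in a spec like this is whether the placement and the address maps agree. Below is a small stdlib-only sketch that hand-transcribes the relevant fields of the spec above into a dict and flags hosts that appear in `addr_map`/`discovery_addr_map` but not in `placement.hosts` (the field names follow the spec file itself, not an authoritative cephadm schema, and whether such a mismatch actually matters to cephadm is a separate question):

```python
# Hand-transcribed subset of the nvmeof service spec above.
spec = {
    "placement": {
        "count": 2,
        "hosts": ["ctrl01.mydomain.com", "ctrl02.mydomain.com"],
    },
    "spec": {
        "addr_map": {
            "ctrl01.mydomain.com": "10.0.0.85",
            "ctrl02.mydomain.com": "10.0.0.131",
            "ctrl03.mydomain.com": "10.0.0.47",
        },
        "discovery_addr_map": {
            "ctrl01.mydomain.com": "10.1.1.101",
            "ctrl02.mydomain.com": "10.1.1.102",
            "ctrl03.mydomain.com": "10.1.1.103",
        },
    },
}

def check(spec: dict) -> list[str]:
    """Return warnings about placement vs. address-map mismatches."""
    warnings = []
    placement = spec["placement"]
    hosts = set(placement["hosts"])
    if placement["count"] != len(hosts):
        warnings.append(
            f"placement.count={placement['count']} but {len(hosts)} hosts listed"
        )
    for field in ("addr_map", "discovery_addr_map"):
        extra = set(spec["spec"][field]) - hosts
        if extra:
            warnings.append(
                f"{field} has entries for unplaced hosts: {sorted(extra)}"
            )
    return warnings

for warning in check(spec):
    print("WARN:", warning)
```

Run against the spec as posted (ctrl03 commented out of `placement.hosts` but still present in both maps), this prints a warning for each map.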
Systemd logs: `nvmeof.gateway-group.ctrl01-anon.log` (attached)
Setup in short:
- Ceph version: 20.2.0
- NVMe-oF version: 1.5.16
- Nodes: 7 total (4 data/OSD + 3 ctrl/nvmeof-gw)
- Networks:
  - 10.0.0.0/24 management network
  - 10.1.0.0/22 public network
  - 10.2.0.0/22 cluster network
Context:
The Ceph cluster is installed as a PoC to test whether Ceph is a viable storage alternative for our environment. We need the NVMe-oF gateway to connect our VMware environment. Any thoughts or ideas? Feel free to share as well.