
Config KRaft mode for SASL/SCRAM authentication SASL_SSL #360

Open
glad47 opened this issue Oct 23, 2024 · 0 comments


glad47 commented Oct 23, 2024

Hi, I am trying to use SCRAM with Kafka in KRaft mode. Even with all of the SCRAM configuration set exactly as described in the documentation, Kafka fails with an error suggesting it is trying to use GSSAPI (it complains about a missing serviceName). SSL alone was working fine; the problem only appeared after adding the SASL/SCRAM configuration. I also printed /etc/kafka/kafka.properties after it was processed by /etc/confluent/docker/ensure, and the resulting configuration matches the broker setup described at https://docs.confluent.io/platform/current/security/authentication/sasl/scram/overview.html#auth-sasl-scram-broker-config exactly, but it made no difference.

Here is /etc/kafka/kafka.properties as logged by the container:

```properties
inter.broker.listener.name=SASL_SSL
ssl.keystore.filename=kafka.keystore.jks
jmx.port=9101
super.users=User:admin
transaction.state.log.min.isr=2
ssl.key.credentials=creds
process.roles=broker,controller
controller.listener.names=CONTROLLER
group.initial.rebalance.delay.ms=0
controller.quorum.voters=1@kafka-1:29093
jmx.hostname=localhost
node.id=1
ssl.key.password=secret
advertised.listeners=SASL_SSL://kafka-1:29092,SASL_SSL_HOST://localhost:9092
sasl.enabled.mechanisms=SCRAM-SHA-512
listener.security.protocol.map=CONTROLLER:SASL_SSL,SASL_SSL:SASL_SSL,SASL_SSL_HOST:SASL_SSL
ssl.truststore.filename=kafka.truststore.jks
ssl.truststore.credentials=creds
broker.id=1
ssl.keystore.password=secret
transaction.state.log.replication.factor=1
listeners=SASL_SSL://kafka-1:29092,CONTROLLER://kafka-1:29093,SASL_SSL_HOST://0.0.0.0:9092
ssl.keystore.location=/etc/kafka/secrets/kafka.keystore.jks
zookeeper.connect=
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
ssl.endpoint.identification.algorithm=
log.dirs=/tmp/kraft-combined-logs
offsets.topic.replication.factor=3
security.protocol=SASL_SSL
ssl.client.auth=none
ssl.keystore.credentials=creds
```
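As a side note on the format above: `listener.security.protocol.map` is a comma-separated list of `LISTENER_NAME:SECURITY_PROTOCOL` pairs. A minimal sketch of how that value decomposes (a hypothetical helper, not part of Kafka):

```python
# Parse a Kafka listener.security.protocol.map value into a dict of
# listener name -> security protocol, mirroring how Kafka interprets it.
def parse_listener_map(value: str) -> dict[str, str]:
    pairs = (item.split(":", 1) for item in value.split(",") if item)
    return {name.strip(): proto.strip() for name, proto in pairs}

mapping = parse_listener_map(
    "CONTROLLER:SASL_SSL,SASL_SSL:SASL_SSL,SASL_SSL_HOST:SASL_SSL"
)
print(mapping)
```

Note that all three listeners here, including CONTROLLER, resolve to SASL_SSL, which is why the controller side also needs a working SASL mechanism.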

Here is the error log:

```
org.apache.kafka.common.KafkaException: java.lang.IllegalArgumentException: No serviceName defined in either JAAS or Kafka config
	at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:184)
	at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:192)
	at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:81)
	at kafka.raft.KafkaRaftManager.buildNetworkClient(RaftManager.scala:328)
	at kafka.raft.KafkaRaftManager.buildNetworkChannel(RaftManager.scala:297)
	at kafka.raft.KafkaRaftManager.<init>(RaftManager.scala:215)
	at kafka.server.SharedServer.start(SharedServer.scala:266)
	at kafka.server.SharedServer.startForController(SharedServer.scala:138)
	at kafka.server.ControllerServer.startup(ControllerServer.scala:206)
	at kafka.server.KafkaRaftServer.$anonfun$startup$1(KafkaRaftServer.scala:98)
	at kafka.server.KafkaRaftServer.$anonfun$startup$1$adapted(KafkaRaftServer.scala:98)
	at scala.Option.foreach(Option.scala:437)
	at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:98)
```
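The "No serviceName defined" message comes from Kafka's GSSAPI (Kerberos) login path, which suggests the Raft client on the controller side is falling back to the default GSSAPI mechanism rather than SCRAM. For comparison, a SCRAM JAAS file referenced via `-Djava.security.auth.login.config` typically looks like the following sketch (the `admin`/`admin-secret` credentials here only mirror the ones in update_run.sh and are assumptions about what kafka_server.conf contains):

```
KafkaServer {
   org.apache.kafka.common.security.scram.ScramLoginModule required
   username="admin"
   password="admin-secret";
};
```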

My docker-compose config is as follows:

```yaml
kafka-1:
  image: confluentinc/cp-kafka:7.7.0
  hostname: kafka-1
  container_name: kafka-1
  ports:
    - "9092:9092"
    - "9101:9101"
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'CONTROLLER:SASL_SSL,SASL_SSL:SASL_SSL,SASL_SSL_HOST:SASL_SSL'
    KAFKA_ADVERTISED_LISTENERS: 'SASL_SSL://kafka-1:29092,SASL_SSL_HOST://localhost:9092'
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3
    KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
    KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 2
    KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
    KAFKA_JMX_PORT: 9101
    KAFKA_JMX_HOSTNAME: localhost
    KAFKA_PROCESS_ROLES: 'broker,controller'
    KAFKA_NODE_ID: 1
    KAFKA_CONTROLLER_QUORUM_VOTERS: '1@kafka-1:29093'
    KAFKA_LISTENERS: 'SASL_SSL://kafka-1:29092,CONTROLLER://kafka-1:29093,SASL_SSL_HOST://0.0.0.0:9092'
    KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
    KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
    CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk
    KAFKA_SECURITY_PROTOCOL: 'SASL_SSL'
    KAFKA_INTER_BROKER_LISTENER_NAME: 'SASL_SSL'
    KAFKA_SASL_ENABLED_MECHANISMS: 'SCRAM-SHA-512'
    KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: 'SCRAM-SHA-512'
    KAFKA_SSL_KEYSTORE_FILENAME: kafka.keystore.jks
    KAFKA_SSL_KEYSTORE_CREDENTIALS: creds
    KAFKA_SSL_KEY_CREDENTIALS: creds
    KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.truststore.jks
    KAFKA_SSL_TRUSTSTORE_CREDENTIALS: creds
    KAFKA_SSL_CLIENT_AUTH: none
    KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ''
    KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/secrets/kafka_server.conf"
    KAFKA_SUPER_USERS: 'User:admin'
  volumes:
    - ./ssl:/etc/kafka/secrets
    - ./ssl/update_run.sh:/tmp/update_run.sh
  command: "bash -c 'if [ ! -f /tmp/update_run.sh ]; then echo \"ERROR: Did you forget the update_run.sh file that came with this docker-compose.yml file?\" && exit 1 ; else chmod +x /tmp/update_run.sh && /tmp/update_run.sh && /etc/confluent/docker/run ; fi'"
```
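For what it's worth, one common cause of this exact error in KRaft is that the CONTROLLER listener is mapped to SASL_SSL but has no SASL mechanism or JAAS configuration scoped to it, so the controller-to-controller client defaults to GSSAPI. A hedged sketch of what the listener-scoped settings could look like as environment variables (the underlying Kafka properties `sasl.mechanism.controller.protocol` and `listener.name.controller.scram-sha-512.sasl.jaas.config` are real, but I haven't verified that they fix this particular setup; the env-var names assume the Confluent image's `_` → `.` and `___` → `-` translation rules):

```yaml
# Assumed listener-scoped SASL settings for the CONTROLLER listener (sketch).
KAFKA_SASL_MECHANISM_CONTROLLER_PROTOCOL: 'SCRAM-SHA-512'
KAFKA_LISTENER_NAME_CONTROLLER_SASL_ENABLED_MECHANISMS: 'SCRAM-SHA-512'
KAFKA_LISTENER_NAME_CONTROLLER_SCRAM___SHA___512_SASL_JAAS_CONFIG: 'org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";'
```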

The update_run.sh is as follows:

```sh
echo "" >> /etc/confluent/docker/ensure

echo "kafka-storage format --ignore-formatted --cluster-id MkU3OEVBNTcwNTJENDM2Qk --config /etc/kafka/kafka.properties --add-scram 'SCRAM-SHA-512=[name=admin,password=admin-secret]';" >> /etc/confluent/docker/ensure
```

Expectation

The server should start as the documentation describes. My suspicion is that KRaft mode does not support this, but the documentation says it does. Thank you, everyone.
