
I'm using Helm to install VerneMQ on my Kubernetes cluster.

The problem is that it can't start, even though I accepted the EULA.

Here is the log:

02:31:56.552 [error] CRASH REPORT Process <0.195.0> with 0 neighbours exited with reason: {{{badmatch,{error,{vmq_generic_msg_store,{bad_return,{{vmq_generic_msg_store_app,start,[normal,[]]},{'EXIT',{{badmatch,{error,{{undef,[{eleveldb,validate_options,[open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{data,[{dir,"./data"}]},{data_root,"./data/leveldb"},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{store_dir,"./data..."},...]],...},...]},...}}},...}}}}}}},...},...} in application_master:init/4 line 138
02:31:56.552 [info] Application vmq_server exited with reason: {{{badmatch,{error,{vmq_generic_msg_store,{bad_return,{{vmq_generic_msg_store_app,start,[normal,[]]},{'EXIT',{{badmatch,{error,{{undef,[{eleveldb,validate_options,[open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{data,[{dir,"./data"}]},{data_root,"./data/leveldb"},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{store_dir,"./data..."},...]],...},...]},...}}},...}}}}}}},...},...}
Kernel pid terminated (application_controller) ({application_start_failure,vmq_server,{bad_return,{{vmq_server_app,start,[normal,[]]},{'EXIT',{{{badmatch,{error,{vmq_generic_msg_store,{bad_return,{{vm

{"Kernel pid terminated",application_controller,"{application_start_failure,vmq_server,{bad_return,{{vmq_server_app,start,[normal,[]]},{'EXIT',{{{badmatch,{error,{vmq_generic_msg_store,{bad_return,{{vmq_generic_msg_store_app,start,[normal,[]]},{'EXIT',{{badmatch,{error,{{undef,[{eleveldb,validate_options,[open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{data,[{dir,\"./data\"}]},{data_root,\"./data/leveldb\"},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{store_dir,\"./data/msgstore\"},{sync,false},{tiered_slow_level,0},{total_leveldb_mem_percent,70},{use_bloomfilter,true},{verify_checksums,true},{verify_compaction,true},{write_buffer_size,41777529},{write_buffer_size_max,62914560},{write_buffer_size_min,31457280}]],[]},{vmq_storage_engine_leveldb,init_state,2,[{file,\"/opt/vernemq/apps/vmq_generic_msg_store/src/engines/vmq_storage_engine_leveldb.erl\"},{line,99}]},{vmq_storage_engine_leveldb,open,2,[{file,\"/opt/vernemq/apps/vmq_generic_msg_store/src/engines/vmq_storage_engine_leveldb.erl\"},{line,39}]},{vmq_generic_msg_store,init,1,[{file,\"/opt/vernemq/apps/vmq_generic_msg_store/src/vmq_generic_msg_store.erl\"},{line,181}]},{gen_server,init_it,2,[{file,\"gen_server.erl\"},{line,374}]},{gen_server,init_it,6,[{file,\"gen_server.erl\"},{line,342}]},{proc_lib,init_p_do_apply,3,[{file,\"proc_lib.erl\"},{line,249}]}]},{child,undefined,{vmq_generic_msg_store_bucket,1},{vmq_generic_msg_store,start_link,[1]},permanent,5000,worker,[vmq_generic_msg_store]}}}},[{vmq_generic_msg_store_sup,'-start_link/0-lc$^0/1-0-',2,[{file,\"/opt/vernemq/apps/vmq_generic_msg_store/src/vmq_generic_msg_store_sup.erl\"},{line,40}]},{vmq_generic_msg_store_sup,start_link,0,[{file,\"/opt/vernemq/apps/vmq_generic_msg_store/src/vmq_generic_msg_store_sup.erl\"},{line,42}]},{application_master,start_it_old,4,[{file,\"application_master.erl\"},{line,277}]}]}}}}}}},[{vmq_plugin_mgr,start_plugin,1,[{file,\"/opt/vernemq/apps/vmq_plugin/src/vmq_plugin_mgr.erl\"},{line,524}]},{vmq_plugin_mgr,start_plugins,1,[{file,\"/opt/vernemq/apps/vmq_plugin/src/vmq_plugin_mgr.erl\"},{line,503}]},{vmq_plugin_mgr,check_updated_plugins,2,[{file,\"/opt/vernemq/apps/vmq_plugin/src/vmq_plugin_mgr.erl\"},{line,444}]},{vmq_plugin_mgr,handle_plugin_call,2,[{file,\"/opt/vernemq/apps/vmq_plugin/src/vmq_plugin_mgr.erl\"},{line,246}]},{gen_server,try_handle_call,4,[{file,\"gen_server.erl\"},{line,661}]},{gen_server,handle_msg,6,[{file,\"gen_server.erl\"},{line,690}]},{proc_lib,init_p_do_apply,3,[{file,\"proc_lib.erl\"},{line,249}]}]},{gen_server,call,[vmq_plugin_mgr,{enable_system_plugin,vmq_generic_msg_store,[internal]},infinity]}}}}}}"}
Crash dump is being written to: /erl_crash.dump...

So where is my problem? I just used helm install vernemq vernemq/vernemq to install it.

1 Answer


I reproduced your issue and fixed it by using the latest Docker image. By default, the chart installs the 1.10.2-alpine image tag.

You can change it by pulling the Helm chart:

helm fetch --untar vernemq/vernemq
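
If you are running Helm 3, note that helm fetch was renamed to helm pull, so the equivalent command there should be:

helm pull --untar vernemq/vernemq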

Then change into the vernemq directory and edit values.yaml:

image:
  repository: vernemq/vernemq
  tag: latest

Save the changes and install the chart, for example with:

helm install vernemq .
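
Alternatively, if you prefer not to edit values.yaml, the same override can be passed on the command line with --set. A minimal sketch, assuming the chart exposes the image.tag value shown above:

helm install vernemq vernemq/vernemq --set image.tag=latest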

Once the chart is installed, you can check the VerneMQ cluster status with:

kubectl exec --namespace default vernemq-vernemq-0 -- /vernemq/bin/vmq-admin cluster show

and the output should be similar to this:

+----------------------------------------------------------------+-------+
|                              Node                              |Running|
+----------------------------------------------------------------+-------+
|VerneMQ@v-vernemq-0.v-vernemq-headless.default.svc.cluster.local| true  |
+----------------------------------------------------------------+-------+
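
To double-check that the pod itself came up and is no longer crash-looping, you can also inspect the pod status and logs. The release and pod names below assume the defaults from the example above:

kubectl get pods --namespace default            # the pod should show STATUS Running
kubectl logs --namespace default vernemq-vernemq-0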