libusb_alloc_transfer causing crash #411

Open
seth-teamfdi opened this issue Nov 7, 2018 · 0 comments

Comments

I am currently using hidapi in a Qt application built on Ubuntu 18.04. It works great, except that the application crashes at seemingly random times with a segmentation fault. After running valgrind and GDB, the issue appears to be within, or related to, hidapi. I found one bug on our side where we were calling hid_enumerate without a matching hid_free_enumeration; fixing that helped, but the application still crashes.

I have since captured a few crashes and loaded the core dumps into GDB. This is the backtrace of the call stack at the time of the crash:

#0 _int_malloc (av=av@entry=0x7f6e94000020, bytes=bytes@entry=208)
at malloc.c:3779
#1 0x00007f6eadce40b1 in __libc_calloc (n=&lt;optimized out&gt;,
elem_size=&lt;optimized out&gt;) at malloc.c:3436
#2 0x00007f6eb0a61822 in libusb_alloc_transfer ()
from /lib/x86_64-linux-gnu/libusb-1.0.so.0
#3 0x000055bdbe7a831d in read_thread ()
#4 0x00007f6eae9816db in start_thread (arg=0x7f6e9bfff700)
at pthread_create.c:463
#5 0x00007f6eadd6b88f in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

This seems to indicate that the crash originates in libusb_alloc_transfer(), but the only place that function is used in my project is within hid-libusb.c itself. It is called from the read_thread function, which I cannot find being invoked anywhere:

static void *read_thread(void *param)
{
	hid_device *dev = param;
	unsigned char *buf;
	const size_t length = dev->input_ep_max_packet_size;

	/* Set up the transfer object. */
	buf = malloc(length);
	dev->transfer = libusb_alloc_transfer(0);
	libusb_fill_interrupt_transfer(dev->transfer,
		dev->device_handle,
		dev->input_endpoint,
		buf,
		length,
		read_callback,
		dev,
		5000/*timeout*/);
	
	/* Make the first submission. Further submissions are made
	   from inside read_callback() */
	libusb_submit_transfer(dev->transfer);

	/* Notify the main thread that the read thread is up and running. */
	pthread_barrier_wait(&dev->barrier);
	
	/* Handle all the events. */
	while (!dev->shutdown_thread) {
		int res;
		res = libusb_handle_events(NULL);
		if (res < 0) {
			/* There was an error. Break out of this loop. */
			break;
		}
	}
	
	/* Cancel any transfer that may be pending. This call will fail
	   if no transfers are pending, but that's OK. */
	if (libusb_cancel_transfer(dev->transfer) == 0) {
		/* The transfer was cancelled, so wait for its completion. */
		libusb_handle_events(NULL);
	}
	
	/* Now that the read thread is stopping, wake any threads which are
	   waiting on data (in hid_read_timeout()). Do this under a mutex to
	   make sure that a thread which is about to go to sleep waiting on
	   the condition actually will go to sleep before the condition is
	   signaled. */
	pthread_mutex_lock(&dev->mutex);
	pthread_cond_broadcast(&dev->condition);
	pthread_mutex_unlock(&dev->mutex);

	/* The dev->transfer->buffer and dev->transfer objects are cleaned up
	   in hid_close(). They are not cleaned up here because this thread
	   could end either due to a disconnect or due to a user call to
	   hid_close(). In both cases the objects can be safely cleaned up
	   after the call to pthread_join() (in hid_close()), but since
	   hid_close() calls libusb_cancel_transfer() on these objects, they
	   cannot be cleaned up here. */
	
	return NULL;
}

read_thread does appear in some comments within hid_close, so perhaps it is started behind the scenes by hid_open and shut down by hid_close? All of our calls to hid_open are matched with a hid_close, though. Any ideas what may be causing a segmentation fault here?
