Stray locks not being cleaned: server replied: Locked (ajax cron) #20380
Comments
cc @icewind1991 for the locking topic |
I believe the error message in the client says "download" even when uploading; that's another issue. The question here is why the file is locked in the first place. Are there other users accessing that folder? I suspect a stray lock. |
It's possible that the files to upload were in use by another program when the sync-client tried to upload them for the first time. |
I have exactly the same problem. It suddenly occurred for one file, and for the first time. I'm the only one syncing to this directory (3 PCs, 2 mobile devices). I cannot overwrite or delete it.
Server configuration
Operating system:
The content of config/config.php:
Error message from logfile:
|
Server configuration
Operating system:
The content of config/config.php: $CONFIG = array ( |
Do you also get "file locked" errors when trying to upload through the web interface? |
Yes |
I had the same problem. My workaround: dirty, but it solved the problem... for now |
I've found some additional files which can not be deleted because they are locked. If you need additional debug data let me know.... |
Are there any errors in the logs before the locking error shows up? |
I see no other errors before the locking error. It occurs the moment I want to modify or delete a file. Here is my owncloud.log |
I do have the same problem with a fresh installation of 8.2 |
This is happening to me (on both 8.2 and 8.2.1, with MySQL), particularly (I think) since I added Dropbox external storage to one of my users (another user already had Dropbox set up previously with no problems). Possibly of note: I just tried cleaning things up, by turning on maintenance mode, deleting everything from |
For performance reasons (since 8.2.1) rows are not cleaned up directly but re-used in further requests |
Fair enough, so that's probably not related to the issue, then. For what it's worth, I've removed the Dropbox external storage from this particular user, and haven't had any file locking problems so far since then. That may be coincidence, of course, or just that the particular files being synched with the Dropbox folder were the ones likely to cause the locking issue. |
All of our s3 files are locked. We cannot delete or rename any files that were there previous to 8.2 update. |
Same on OC v8.2.1 with TFL and memcaching via Redis, as recommended. Anyway, there are a few entries in oc_file_locks (although when using Redis there shouldn't be any locks there?). No idea how to fix this. Only one specific file is affected, driving me and the never-ending, logfile-filling desktop clients crazy. Thankful for every tip or workaround! No idea how to "unlock" the file... |
@icewind1991 are you able to reproduce this issue? For DB-based locking it might be possible to remove the locks by cleaning the "oc_file_locks" table. |
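As a sketch, that DB cleanup could look like the following (the MySQL backend, default `oc_` table prefix, and database/user names are assumptions; back up and enable maintenance mode first):

```shell
# Assumption: MySQL backend, default "oc_" table prefix, database "owncloud".
# Enable maintenance mode and back up before touching the table.
mysql -u owncloud -p owncloud <<'SQL'
-- Drop only locks whose TTL has already expired (safer than truncating):
DELETE FROM oc_file_locks WHERE ttl < UNIX_TIMESTAMP();
SQL
```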
Are you guys using php-fpm? I suspect that if the PHP process gets killed due to timeouts, the locks might not get cleared properly. However, I thought that locks now have a TTL, @icewind1991? |
Yes, php-fpm is in the game too. @PVince81 perfect! That was what I was looking for (at http://redis.io/commands). For the moment syncing works fine again. Do you know the commands for listing all keys/locked files via redis-cli too? And I still don't get why oc_file_locks has entries although I'm using Redis... |
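For the record, a sketch of inspecting lock keys from redis-cli (the key pattern and example key are assumptions; your instance may prefix keys differently, so inspect before deleting anything):

```shell
# List candidate lock keys without blocking the server (SCAN, not KEYS):
redis-cli --scan --pattern '*lock*'
# Show one key's remaining time-to-live in seconds (-1 = no expiry set):
redis-cli TTL 'lock:files/example'   # 'lock:files/example' is a made-up key
```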
I've been experiencing the same issue. Operating system: Ubuntu 14.04.3 LTS. After entering maintenance mode, I saw that the table oc_file_locks had lots of entries with lock > 0 (even > 10) and about 150 entries with a future ttl value. Solved by deleting all rows and leaving maintenance mode. |
Same issue here. all-inkl.com shared hosting Flushing oc_file_locks resolves all issues. |
I was hit by this bug too. My system: PHP 5.6.14. Flushing oc_file_locks seems to fix this issue indeed, so I wrote a little script to remove all the stale locks from the file_locks table:
#!/usr/bin/env bash
##########
# CONFIG #
##########
# CentOS 6: /usr/bin/mysql
# FreeBSD: /usr/local/bin/mysql
mysqlbin='/usr/local/bin/mysql'
# The location where OwnCloud is installed
ownclouddir='/var/www/owncloud'
#################
# ACTUAL SCRIPT #
#################
dbhost=$(grep dbhost "${ownclouddir}/config/config.php" | cut -d "'" -f 4)
dbuser=$(grep dbuser "${ownclouddir}/config/config.php" | cut -d "'" -f 4)
dbpass=$(grep dbpassword "${ownclouddir}/config/config.php" | cut -d "'" -f 4)
dbname=$(grep dbname "${ownclouddir}/config/config.php" | cut -d "'" -f 4)
dbprefix=$(grep dbtableprefix "${ownclouddir}/config/config.php" | cut -d "'" -f 4)
"${mysqlbin}" --silent --host="${dbhost}" --user="${dbuser}" --password="${dbpass}" --execute="DELETE FROM ${dbprefix}file_locks WHERE ttl < UNIX_TIMESTAMP();" "${dbname}"
Just configure where the mysql command can be found (hint: ...). And of course you can run this script as a cronjob every night, so you don't have to think about these stale locks anymore. Hopefully this workaround script is useful for someone besides just me :) |
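A possible crontab entry for that nightly run (the script path, file name, and crontab user are assumptions; adjust to wherever you saved the script):

```shell
# Edit the crontab of a user that may read config.php, e.g.:
#   crontab -u root -e
# Then run the (hypothetically named) cleanup script every night at 03:15:
15 3 * * * /usr/local/bin/clean-owncloud-locks.sh >/dev/null 2>&1
```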
Hi, recently I had the same problem (using the database as the locking system). As I read @PVince81's post here, the "ttl" was introduced for removing old or stray locks? Well, I tested the expiry mechanism and it seems not to work as expected.
In the last case I would expect the file to be renamed successfully, but the file lock is still respected although it has expired. Looking into the code of the ... So I wonder if this is the only purpose of the ttl: only to clean up valid, old, fully released locks? In any case, it seems useful to introduce a timestamp like the ttl that is checked at the moment a lock is to be acquired; for example, let's call this timestamp "stray_timeout".
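That acquire-time check could be sketched in SQL against the oc_file_locks columns (key/lock/ttl); the literal key, the 3600s renewal, and reusing ttl as the stray timeout are all assumptions, not the actual implementation:

```shell
# Hypothetical sketch: take an exclusive lock (-1) only if the row is free
# OR its ttl has passed, i.e. treat an expired lock as stray at acquire time.
mysql -u owncloud -p owncloud <<'SQL'
UPDATE oc_file_locks
   SET `lock` = -1, ttl = UNIX_TIMESTAMP() + 3600
 WHERE `key` = 'files/abc123'                    -- made-up lock key
   AND (`lock` = 0 OR ttl < UNIX_TIMESTAMP());
SQL
```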
Well, hope these thoughts are not totally nonsense and may help ;-) ownCloud version: 8.2.1 (stable) |
@icewind1991 can you have a look at why the expiration is not working? |
I cleaned the oc_file_locks table, it is empty, but I still get error 423 with certain files:
I am running Redis as a memcache. cron.php is run every 15 minutes by a crontab entry; in the web interface it says it was run a few minutes ago. |
@e-alfred are you using Redis for locking too? If yes, then oc_file_locks is not used, but Redis is. You might want to clear the Redis cache too, then. |
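A sketch of that Redis flush: FLUSHDB clears a single database, and the `-n` index must match the `dbindex` in config.php (0 by default, which is an assumption about this setup):

```shell
# Flush only the Redis database ownCloud uses (also drops its memcache data):
redis-cli -n 0 FLUSHDB
# Or, more bluntly, wipe every database on this Redis server:
# redis-cli FLUSHALL
```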
Yes, Redis for both caching and locking. I flushed the Redis cache and will see what happens. |
@PVince81 The problem still persists; interestingly, only for one user with synced hidden files (Git repositories and Eclipse configuration). I am getting a 423 response for certain files. Here are two examples:
|
Okay, I run an |
|
We have a similar problem on a small auxiliary installation running ownCloud 9.1.3. Is this issue understood already and in the pipeline for fixing? Files are sometimes locked, and neither cleaning up the oc_file_locks table nor occ files:scan --all solves the problem. On this particular server the crons had not run for a long time due to misconfiguration. We corrected that and ran the cron job a few times by hand while trying to resolve the problem. It did not help. The TTL entries in oc_file_locks are set to some insanely high values. What should they be normally? 3600s? The problem appears for a folder which is an "External Mount" pointing to the local disk on the same server and then shared with a user by the administrator. Transactional File Locking is not enabled -- should it be? Here are the server error messages.
|
@moscicki this issue here is about people using ajax cron, where ajax cron doesn't run often enough to trigger the oc_file_locks cleaning background job. If you say that even clearing that table doesn't solve the problem, then it's a problem that has not been reproduced and understood yet. The TTL is set to 3600 in the default value here: https://github.com/owncloud/core/blob/v9.1.3/lib/private/Lock/DBLockingProvider.php#L100. From my understanding it's not that the lock isn't cleared when clearing the table; the problem is that the lock reappears after clearing and stays there. The posted exception is about an upload (WebDAV PUT). Usually unlocking is triggered after the ... See #22370 (comment) for possible fixes. If no connection abortion or timeouts were involved, then the problem might be somewhere else. |
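Switching from ajax cron to a real system cron, so the cleanup background job runs even when nobody loads the web UI, usually looks like this (the web-server user and install path are assumptions; adjust to your distribution):

```shell
# One-off run of ownCloud's background jobs, as the web server's user:
sudo -u www-data php -f /var/www/owncloud/cron.php
# Recurring run: add to that user's crontab (crontab -u www-data -e):
#   */15 * * * * php -f /var/www/owncloud/cron.php
# Then switch the background-jobs mode from "AJAX" to "Cron" in the admin page.
```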
I have had the same problem since today. I already cleared the database tables
but when I try to move a file to another directory using the web interface I get the lock error message and see this in my log
Repeating the move procedure results in the same error, no moving possible. |
We also had multiple folders locked for multiple users, which couldn't be used or deleted... This was on CentOS 6 with cPanel. The only thing that worked for us was configuring Redis as explained here and here. |
Closing in favor of a more generic ticket about ajax cron: #27574 Please discuss possible approaches there |
Should this issue really be closed in favor of #27574 ? |
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs. |
Hi there,
I have problems uploading some files via the Windows 7 client (version 2.0.2, build 5569) connected to an ownCloud 8.2 stable server.
The files exist on the client, not on the server. The log file on the client says:
I wonder why the client reports problems downloading - it should be trying to upload.
At first I thought that the file on the client could be in use by another program. But the server says that the file is locked, not the client.
Can anyone help me please?
Regards,
klausguenter