Prevent CUPS from pausing on error

Your problem could be tackled in different ways, depending on the version of CUPS you’re running.

More recent versions of CUPS come with built-in functionality that can help here, called “ErrorPolicy”. Its default setting is selected in cupsd.conf and determines how cupsd should handle print queues that do not behave as expected. You have four choices, and each queue can be tagged individually:

ErrorPolicy abort-job
ErrorPolicy retry-job
ErrorPolicy retry-this-job
ErrorPolicy stop-printer

abort-job
— Abort this job and proceed with the next job in the same queue.
retry-job
— Retry this job after waiting for N seconds (where N is determined by cupsd.conf’s “JobRetryInterval” directive).
retry-this-job
— Retry the current job immediately and indefinitely.
stop-printer
— Stop the current print queue and keep the job for future printing. This is still the default unless you configure one of the alternatives above. It was also the default (and only possible) behaviour for all queues in previous versions of CUPS, and it is the behaviour you want to get rid of, as per your question.

Additionally, you can set an individual ErrorPolicy for each separate print queue. This setting is stored in the printers.conf file. (Set it from a command line with lpadmin -p printername -o printer-error-policy=retry-this-job.)
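As a cupsd.conf sketch, the global default could look like this (the interval value here is an illustrative assumption, not the CUPS default):

```
# /etc/cups/cupsd.conf — global default for all queues
ErrorPolicy retry-job
# used by retry-job: wait 30 seconds between attempts
JobRetryInterval 30
```

Per-queue policies set with lpadmin override this global default.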

For older versions of CUPS I’d recommend having a look at beh, the CUPS BackEnd Handler. beh is a wrapper that can be applied to any CUPS backend.

Assume your print queue currently has a backend of socket://192.168.1.111:9100, and it behaves in a way you don’t like (being disabled by cupsd from time to time due to network connection problems). With beh you’d re-define your backend like this:

beh:/0/20/120/socket://192.168.1.111:9100

This would retry a job 20 times at two-minute intervals, and disable the queue only if it still does not succeed. Or you could do this:

beh:/1/3/5/socket://192.168.1.111:9100

This retries the job 3 times with 5-second delays between the attempts. If the job still fails, it is discarded, but the queue is not disabled. Do you want to let cupsd try indefinitely to connect to the device? Good, try this:

beh:/1/0/30/socket://192.168.1.111:9100

Try infinitely until the printer comes back, with 30 seconds between connection attempts. The job does not get lost when the printer is turned off, so you can intentionally delay printing simply by switching off the printer. A good configuration for desktop printers and/or home users.
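To actually attach beh to a queue, set it as the device URI like any other backend; a sketch with a hypothetical queue name (the socket address is the one from the examples above):

```
lpadmin -p printername -E -v beh:/1/0/30/socket://192.168.1.111:9100
```

The `-E` after `-p` enables the destination and makes it accept jobs.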

Overall, there is no need to mess around with bash scripts, cron jobs, lpadmin, cupsenable or sudo in order to re-activate CUPS queues that go down erratically.

Copying data to home directory or backup directory

Mount your new disk on your computer as /home_local or something similar.

Copy data from the lagavulin fileserver either over NFS (without removing any files!):
rsync -av /home/USER/ /home_local/USER/
or over ssh directly from lagavulin:
rsync -av -e "ssh -c blowfish" lagavulin:/home1/USER/ /home_local/USER/

You can do the above while you continue to work as usual. For the final transfer you should also use the --delete option to rsync, which removes files on the target if they no longer exist on the source, i.e.
rsync -av --delete /home/USER/ /home_local/USER/

When doing this it is best not to be logged in and running a lot of programs. Simply logging in through a text console is perhaps best.

Make sure that the source and target are in the correct order!

When this has been done, contact someone to make sure the NFS export is working, that the auto.home and auto.backup files are correct, and to issue the correct commands to put the information in the NIS tables.
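For reference, a wildcard auto.home entry for such a setup might look like this (the mount options are assumptions; only the server name is taken from above):

```
# auto.home — hypothetical wildcard map entry
*   -rw,hard,intr   lagavulin:/home1/&
```

The `&` substitutes the matched key (the user name) into the server path.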

To do the final backup I have a script named backup.sh in my home directory, which I run periodically:

#!/bin/sh
BHOST="auchentoshan"
if [ "$(hostname)" = "$BHOST" ]; then
  time rsync -e "ssh -c blowfish" -av --delete /home/daniels/ lagavulin:/home1/daniels/
else
  echo "This is not host $BHOST"
fi

In this case you should also ensure that you get the correct order for source and target directories!

Private: fix for broken rpm database

rm /var/lib/rpm/__db*
rpm --rebuilddb
yum clean all

Then run
yum update