[131047250010] |$0 is still the current shell's invocation, not the sourced script.
[131047250050] |My best thought currently is to do source $script $script, so that the first positional parameter contains the necessary information.
[131047250060] |Anyone have a better way?
[131047250070] |To be clear, I am sourcing the script, not running it:
[131047260010] |I think that you could use the $BASH_SOURCE variable.
[131047260020] |It returns the path that was executed:
[131047260030] |So as the next step we should check whether the path is relative or not.
[131047260040] |If it's not relative, everything is OK.
[131047260050] |If it is, we can check the path with pwd and concatenate it with / and $BASH_SOURCE.
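The approach described can be sketched like this (a sketch assuming Bash; the variable name src is mine):

```shell
# Make the sourced script's path absolute if $BASH_SOURCE is relative.
src=${BASH_SOURCE[0]}
case $src in
  /*) ;;                    # already absolute: nothing to do
  *)  src=$(pwd)/$src ;;    # relative: prepend the current directory
esac
echo "$src"
```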
[131047270010] |Actually, "dirname $0" will get you the path to the script, but you have to interpret it a bit:
[131047270020] |You have to prepare to handle "." as the directory name under some common circumstances.
[131047270030] |I'd experiment a bit, as I remember the dirname built-in to ksh doing things a bit differently when "." appears in PATH.
[131047280010] |In tcsh, $_ at the beginning of the script will contain the location if the file was sourced, and $0 contains it if it was run.
[131047280020] |In Bash:
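The elided Bash snippet presumably relies on $BASH_SOURCE, which is set whether the file is sourced or run (my sketch, not the original):

```shell
# Put this at the top of the script; $BASH_SOURCE works in both cases,
# with $0 as a fallback for shells that don't set it.
echo "this file is: ${BASH_SOURCE[0]:-$0}"
```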
[131047290010] |For thoroughness and the sake of searchers, here is what these do...
[131047290020] |It is a community wiki, so feel free to add other shell's equivalents (obviously, $BASH_SOURCE will be different).
[131047290030] |test.sh:
[131047290040] |test2.sh:
[131047290050] |General Setup -> Initial RAM filesystem and RAM disk (initramfs/initrd) support -> Initramfs source file(s)).
[131047310020] |You specify the file in a special format, like this (my init for x86):
[131047310030] |I haven't used it on ARM, but it should work. /init is the file where you can put startup commands.
[131047310040] |The rest are the various files needed (like busybox, etc.).
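For reference, a gen_init_cpio input file uses lines like these (a minimal sketch; the paths are illustrative, not the author's actual file):

```
dir  /dev 755 0 0
nod  /dev/console 644 0 0 c 5 1
file /init my-initramfs/init 755 0 0
file /bin/busybox my-initramfs/busybox 755 0 0
```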
[131047320010] |A few things that come to mind:
[131047320020] |/etc/cron.d/ and this directory contains the actual script called host-backup, and also contains a cfengine backup file called host-backup.cfsaved, as so:
[131047340060] |Does this operating system execute all files at /etc/cron.d/*, or does it only execute files which match a certain pattern?
[131047340070] |Can I configure this, and where is this defined?
[131047340080] |I cannot find this answer in the RHEL documentation.
[131047350010] |I think the reason you've had difficulty tracking down the answer is that it's not a Red Hat-specific question.
[131047350020] |The problem you're seeing is part of the standard functionality of cron
- each file in the directory you identify is automatically treated as a separate job.
[131047350030] |So, the short answer to your question is "yes, all files are executed".
[131047350040] |This is not something that can be configured, I think.
[131047360010] |(If you're paying for Red Hat support, you should ask them this kind of question.
[131047360020] |This is exactly what you're paying for!)
[131047360030] |From the RHEL5 crontab(5) man page:
[131047360040] |If it exists, the /etc/cron.d/ directory is parsed like the cron spool directory, except that the files in it are not user-specific and are therefore read with /etc/crontab syntax (the user is specified explicitly in the 6th column).
[131047360050] |(Is there a simpler way of reading RHEL man pages without having access to it?
[131047360060] |At least this way I could see that this paragraph is part of the Red Hat patch, so it's not a standard Vixie Cron 4.1 feature.)
[131047360070] |Looking at the source, I see that the following files are skipped: .*, #*, *~, *.rpmnew, *.rpmorig, *.rpmsave.
[131047360080] |So yes, your *.cfsaved files are read in addition to the originals.
[131047370010] |Here is the answer from RedHat support:
[131047370020] |Please be informed that all files under cron.d directory are examined and executed, it's basically an extension of /etc/crontab file (ie; same effect if you add the entries to /etc/crontab file)
[131047370030] |So, to answer my question "Does this operating system execute all files at /etc/cron.d/*, or does it only execute files which match a certain pattern?
[131047370040] |Can I configure this, and where is this defined?"
[131047370050] |All files under /etc/cron.d/* are executed (although it seems that certain file extensions such as .rpmsave, *~, etc. are ignored, according to documentation in the source files).
[131047370060] |It is not possible to configure this via a configuration file.
[131047370070] |Configuring this is probably possible if the source is recompiled.
[131047370080] |This behavior is mentioned in the documentation contained with the source, but doesn't appear in any manual or man page that I can find.
[131047380010] |rpm2cpio, e.g.
[131047400030] |There's also a portable rpm2cpio script if you don't want or can't get the version that's bundled with the rpm utility (the script may not work with older or newer versions of the rpm format though).
[131047410010] |I would think that (like Windows and Linux) any archiver program should be able to decompress it. iArchiver, the unArchiver, and Archiver all list "read-only RPM" in their supported formats.
[131047420010] |/usr/bin/time, where I could run that command and pass it the command line I want it to run and limit.
[131047430010] |From within a program, call setrlimit(RLIMIT_CPU, ...).
[131047430020] |From the shell, call ulimit -t 42 (this is not standard, but supported by most shells (including bash and ksh) on most unix variants).
[131047430030] |This causes the current process to be killed once it has used up N seconds of CPU time.
[131047430040] |The limitation is inherited by child processes.
[131047430050] |A common shell idiom is (ulimit -t 42; runaway_process) if you want to be able to run other unlimited processes afterwards from the same shell.
[131047430060] |See also Is there a way to limit the amount of memory a particular process can use in Unix? .
[131047430070] |The principle is the same, you're just limiting a different resource.
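The subshell idiom described above can be demonstrated like this (a sketch; the 2-second limit and the busy loop are just for illustration):

```shell
# The subshell gets a 2-second CPU limit and is killed when it exceeds it;
# the parent shell's own limit is untouched.
(ulimit -t 2; while :; do :; done)
echo "parent shell still alive, limit: $(ulimit -t)"
```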
[131047440010] |In addition to Gilles' answer, there is the cpulimit tool, which does exactly what you want, including modifying the limit at runtime.
[131047440020] |Additionally, it can limit the process to only certain CPUs/cores, IIRC.
[131047450010] |/etc/passwd file has as the last item on a user's line the program to be run upon login.
[131047500020] |For normal users this is typically set to /bin/sh or another shell (e.g. bash, zsh).
[131047500030] |Traditionally, identities that are used to own processes or files or other resources have their "shell" set to /bin/false, as in:
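For example, such a line might look like this (the UID, GID and home directory here are illustrative, not from the original post):

```
syslog:x:101:103::/home/syslog:/bin/false
```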
[131047500040] |The pseudo-user syslog owns /var/log/syslog and is the only UID which has write permission for that file, but one cannot log in as user syslog, as there is no command interpreter to be run.
[131047510010] |Regarding the question you are trying to answer:
[131047510020] |Create a user john who should not get an interactive shell.
[131047510030] |The question means "Create a user named john who won't be able to log in and run commands from a shell".
[131047510040] |Interactive describes the way the shell works: the user types something, and the shell does something accordingly (there is a kind of communication between the user and the shell).
[131047510050] |The word "interactive" doesn't really add any information to the question, because as long as there is a user typing commands, the shell session is interactive.
[131047510060] |There is also the non-interactive mode, where the user saves a series of commands ey wants to run in a file (called a shell script), and executes the file afterwards.
[131047510070] |Some commands behave differently depending on whether they are run interactively or non-interactively.
[131047510080] |You can read more here (this document is for bash, but the same concept applies for other shells).
[131047510090] |On how to create such a user: if you skim through man useradd, there is an option to set the login shell, -s or --shell.
[131047510100] |You can also create a user normally, in any way you want (which I assume you know already?), then edit /etc/passwd like in msw's answer.
[131047520010] |du -sh /dir/* by size?
[131047550030] |I read one site that said to use | sort -n, but that's obviously not right.
[131047550040] |Here's an example that is wrong.
[131047560010] |You can use du -sh * | sort -h.
[131047560020] |This tells sort that the input is the human-readable format.
[131047560030] |This feature was added recently to GNU Core Utilities 7.5 in Aug 2009, so many distributions do not yet have it.
[131047570010] |This little Perl script does the trick.
[131047570020] |Save it as duh (or whatever you want) and call it with duh /dir/*
[131047580010] |Try using the -k flag to count 1K blocks instead of using human-readable.
[131047580020] |Then, you have a common unit and can easily do a numeric sort.
[131047580030] |You don't explicitly require human units, but if you did, then there are a bunch of ways to do it.
[131047580040] |Many seem to use the 1K block technique above, and then make a second call to du.
[131047580050] |http://serverfault.com/questions/62411/how-can-i-sort-du-h-output-by-size
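For instance, the 1K-block technique looks like this (a sketch; --max-depth assumes GNU du, and the directory is a placeholder):

```shell
# Same unit (1K blocks) for every entry, so a plain numeric sort works:
du -k --max-depth=1 . | sort -n
```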
[131047590010] |If you don't have sort -h, you can do this:
[131047590020] |This gets the du list, separates the suffix, and sorts using that.
[131047590030] |Since there is no suffix for <1K, the first sed adds a B (for byte).
[131047590040] |The second sed adds a delimiter between the digit and the suffix.
[131047590050] |The third sed converts G to Z so that it's bigger than M; if you have terabyte files, you'll have to convert G to Y and T to Z. Finally, we sort by the two columns, then we replace the G suffix.
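The commands themselves didn't survive here; the following is a sketch consistent with the description (my reconstruction, assuming sizes up to gigabytes and GNU tools):

```shell
du -sh -- * |
  sed 's/^\([0-9.]*\)\([[:space:]]\)/\1B\2/' |  # <1K has no suffix: add B
  sed 's/^\([0-9.]*\)\([A-Z]\)/\1 \2/' |        # delimit digits and suffix
  sed 's/^\([0-9.]*\) G/\1 Z/' |                # G sorts after M as Z
  sort -k2,2 -k1,1n |                           # sort by suffix, then number
  sed 's/^\([0-9.]*\) Z/\1 G/'                  # restore the G suffix
```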
[131047600010] |If you don't have a recent version of GNU coreutils, you can call du without -h to get sortable output, and produce human-friendly output with a little postprocessing.
[131047600020] |This has the advantage of working even if your version of du doesn't have the -h flag.
[131047600030] |If you want SI suffixes (i.e. multiples of 1000 rather than 1024), change 1024 to 1000 in the while loop body.
[131047600040] |(Note that that 1000 in the condition is intended, so that you get e.g. 1M rather than 1000k.)
[131047600050] |If your du has an option to display sizes in bytes (e.g. -b or -B 1 — note that this may have the side effect of counting actual file sizes rather than disk usage), add a space to the beginning of s (i.e. s=" kMGTEPYZ";), or add if (x<1000) {return x} else {x/=1024} at the beginning of the human function.
[131047600060] |Displaying a decimal digit for numbers in the range 1–10 is left as an exercise to the reader.
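The script itself was lost in transit; here is a sketch matching the description above (my reconstruction: du in 1K units, a numeric sort, then a human() function that divides by 1024 while the value is at least 1000):

```shell
du -k --max-depth=1 . | sort -n | awk '
function human(x) {
    s = "kMGTEPYZ"; i = 1
    while (x >= 1000 && i < length(s)) { x /= 1024; i++ }
    return int(x) substr(s, i, 1)
}
{ print human($1) "\t" $2 }'
```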
[131047610010] |Here's what I use on Ubuntu 10.04, CentOS 5.5, FreeBSD and Mac OS X.
[131047610020] |I borrowed the idea from www.geekology.co.za/ and earthinfo.org, as well as the infamous ducks from "Linux Server Hacks" by O'Reilly.
[131047610030] |I am still adapting it to my needs.
[131047610040] |This is still a work in progress (As in, I was working on this on the train this morning.):
[131047610050] |Here's the output:
[131047620010] |usr/gen_init_cpio in the kernel source tree to build the cpio archive during the kernel build.
[131047670040] |That's indeed a good way of building a cpio archive without having to populate the local filesystem first (which would require being root to create all the devices, or using fakeroot or a FUSE filesystem which I'm not sure has been written already).
[131047670050] |All you're missing is generating the input file to gen_init_cpio as a build step.
[131047670060] |E.g. in shell:
[131047670070] |If you want to reflect the symbolic links to busybox that are present in your build tree, here's a way (I assume you're building on Linux):
[131047670080] |Here's a way to copy all your symbolic links:
[131047670090] |For busybox, maybe your build tree doesn't have the symlinks, and instead you want to create one for every utility that you've compiled in.
[131047670100] |The simplest way I can think of is to look through your busybox build tree for .*.o.cmd files: there's one per generated command.
[131047680010] |If you are in the busybox shell (ash), you don't need to worry about aliases, as they will be run as commands by default, IIRC.
[131047680020] |Anyway, busybox --help gives a list of supported commands.
[131047680030] |In my case they are:
[131047680040] |In the case of the first method, you create the device nodes with the mknod(1) command.
[131047680050] |For example:
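The elided example was probably along these lines (the device numbers shown are the classic console ones; creating character devices requires root, so the runnable line below demonstrates mknod with an unprivileged FIFO node instead):

```shell
# Classic console device node for an initramfs (as root):
#   mknod /dev/console c 5 1
# Unprivileged demonstration of mknod, creating a FIFO node:
mknod /tmp/demo.fifo p && ls -l /tmp/demo.fifo && rm /tmp/demo.fifo
```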
[131047690010] |The first few lines of the initscript in my initramfs are simply:
[131047690020] |It creates the symlinks for you.
[131047690030] |It only takes an unmeasurably small amount of time on my 500MHz board; possibly longer on very low-end hardware, but likely manageable.
[131047690040] |It saves a bunch of issues remembering to create all the right links when you update BB...
[131047700010] |dpkg --search /bin/ls gives:
[131047720030] |That is, the file /bin/ls belongs to the Debian package named coreutils. (See this post if you are interested in a package containing a file that isn't installed.)
[131047720040] |What is the Fedora equivalent?
[131047730010] |You can use rpm -qf /bin/ls to figure out what package your installed version belongs to:
[131047730020] |Update: Per your comment, the following should work if you want only the name of the package (I just got a chance to test):
[131047730030] |You can also use yum provides /bin/ls to get a list of all available repository packages that will provide the file:
[131047740010] |atime fully enabled: you can set this in /etc/fstab. The current default is relatime, but you want to use just atime.
[131047760030] |Every time a file is accessed, the timestamp will get updated.
[131047760040] |Then do some usage for a few days, to see which files have never had their atime updated.
[131047760050] |I would do all of this in a VM, and very carefully, because I imagine there are a few files that are read when the system is in read-only mode. Note: set it to noatime once you're ready for production; otherwise you'll do a write every time you read, which is inefficient.
[131047760060] |Though to be honest, I'd look at Damn Small Linux. Do you really need to be smaller than that? Build based on their distro and simply remove the window manager and a few extra programs... leave all the command line tools; that way, if you ever need to repair or reload, you have the shell.
[131047770010] |Actively use your system for a while with file access times enabled.
[131047770020] |See what files never have their access time modified.
[131047770030] |These are candidates for deletion (but make sure there isn't a reason to keep them, e.g. because they're hardware drivers for hardware you don't have, or they're needed early in the boot process when the root partition is still mounted read-only).
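A sketch of how to look for such candidates (the path and the 7-day window are arbitrary choices of mine):

```shell
# List regular files not accessed in the last 7 days (atime must be
# enabled on the filesystem for the results to mean anything):
find /usr -type f -atime +7 2>/dev/null | head
```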
[131047770040] |Since you'll have few big applications, check what libraries are used by a single executable.
[131047770050] |Consider linking them statically.
[131047780010] |Where exactly are you starting from?
[131047780020] |Are you stripping an existing distro?
[131047780030] |Is there a reason you have to start with any distro?
[131047780040] |You might want to consider building an embedded system from scratch and load only what you know you need.
[131047790010] |assuming you are using Debian or its derivatives
[131047790020] |After some days of (heavy) usage, run popularity-contest.
[131047790030] |It will display the oldest unused packages at the bottom.
[131047790040] |Uninstall those, but with a watchful eye on whether or not there's stuff depending on them installed.
[131047790050] |Here's a snippet of the output:
[131047790060] |The columns mean atime, ctime, package-name, and file accessed.
[131047800010] |ioctl() method in file_operations.
[131047830030] |Instead I found two new calls: unlocked_ioctl() and compat_ioctl().
[131047830040] |What is the difference between ioctl(), unlocked_ioctl(), and compat_ioctl()?
[131047840010] |Meta-answer: All the raw stuff happening to the Linux kernel goes through lkml (the Linux kernel mailing list).
[131047840020] |For explicative summaries, read or search lwn (Linux weekly news).
[131047840030] |Answer: From The new way of ioctl() by Jonathan Corbet:
[131047840040] |ioctl() is one of the remaining parts of the kernel which runs under the Big Kernel Lock (BKL).
[131047840050] |In the past, the usage of the BKL has made it possible for long-running ioctl() methods to create long latencies for unrelated processes.
[131047840060] |There follows an explanation of the patch that introduced unlocked_ioctl and compat_ioctl.
[131047840070] |The removal of the ioctl field happened a lot later.
[131047840080] |Explanation: When ioctl was executed, it took the Big Kernel Lock (BKL), so nothing else could execute at the same time.
[131047840090] |This is very bad on a multiprocessor machine, so there was a big effort to get rid of the BKL.
[131047840100] |First, unlocked_ioctl was introduced.
[131047840110] |It lets each driver writer choose what lock to use instead.
[131047840120] |This can be difficult, so there was a period of transition during which old drivers still worked (using ioctl) but new drivers could use the improved interface (unlocked_ioctl).
[131047840130] |Eventually all drivers were converted and ioctl could be removed.
[131047840140] |compat_ioctl is actually unrelated, even though it was added at the same time.
[131047840150] |Its purpose is to allow 32-bit userland programs to make ioctl calls on a 64-bit kernel.
[131047840160] |The meaning of the last argument to ioctl depends on the driver, so there is no way to do a driver-independent conversion.
[131047850010] |There are cases when the replacement of the (include/linux/fs.h) struct file_operations method ioctl() with compat_ioctl() in kernel 2.6.36 does not work (e.g. for some device drivers), and unlocked_ioctl() must be used.
[131047860010] |/ in /folder/remote/, and the placement of --exclude='*' after the include rules, are important.)
[131047880020] |In shells that support brace expansion (e.g. bash, ksh, zsh):
[131047880030] |Add --include='*/' --prune-empty-dirs if you want to copy files in subdirectories as well.
[131047890010] |KDE
[131047890020] |For users on Linux and Unix, KDE offers a full suite of user workspace applications which allow interaction with these operating systems in a modern, graphical user interface.
[131047890030] |This includes Plasma Desktop, KDE's innovative desktop interface.
[131047890040] |Other workspace applications are included to aid with system configuration, running programs, or interacting with hardware devices.
[131047890050] |While the fully integrated KDE Workspaces are only available on Linux and Unix, some of these features are available on other platforms.
[131047890060] |In addition to the workspace, KDE produces a number of key applications such as the Konqueror web browser, Dolphin file manager and Kontact, the comprehensive personal information management suite.
[131047890070] |However, our list of applications includes many others, including those for education, multimedia, office productivity, networking, games and much more.
[131047890080] |Most applications are available on all platforms supported by KDE development.
[131047890090] |KDE also brings to the forefront many innovations for application developers.
[131047890100] |An entire infrastructure has been designed and implemented to help programmers create robust and comprehensive applications in the most efficient manner, eliminating the complexity and tediousness of creating highly functional applications.
[131047890110] |It is our hope and continued ambition that the KDE team will bring open, reliable, stable and monopoly-free computing to the everyday user.
[131047900010] |http://kde.org
[131047910010] |make gives:
[131047930030] |Output of ./configure:
[131047930040] |UPDATE: I'm no longer experiencing this problem and have no idea what fixed it.
[131047940010] |You're probably missing the GTK-Doc tools to generate documentation.
[131047940020] |One way to find out these dependencies is by looking at what distributions do to build the package.
[131047940030] |For example on Debian, in debian/control, the dependencies (except Debian-specific stuff) are:
[131047940040] |m4, libltdl-dev | libltdl7-dev (>= 2.2.6), libasound2-dev, libvorbis-dev, libgtk2.0-dev (>= 2.20), tdb-dev (> 1.1), gtk-doc-tools, libpulse-dev (>= 0.9.11), libgstreamer0.10-dev (>= 0.10.15)
[131047950010] |guest as username and no password.
[131047970020] |It seems to me that sometimes Ubuntu forgets to try with the guest credentials.
[131047980010] |screen -S somename -Rrd
start the application
press Ctrl+A D to “detach” from the screen session, leaving it running in the background
[131048010040] |From the client: ssh server, then screen -S somename -Rrd to reconnect to the screen session
[131048010050] |If you want messages to be recorded automatically, the best way is to use the standard log facility.
[131048010060] |You can arrange for log entries to be sent to other machines, either crudely with most basic syslogs, or with better filtering and dispatching options with rsyslog.
[131048020010] |I think that in this case, better than redirecting output to a file is redirecting it to a named pipe (fifo), because there is no need to store all the data on disk.
[131048020020] |If the program produces a lot of output, we could run out of disk space.
[131048020030] |Instead of a conventional, unnamed, shell pipeline, a named pipeline makes use of the filesystem.
[131048020040] |It is explicitly created using mkfifo() or mknod(), and two separate processes can access the pipe by name — one process can open it as a reader, and the other as a writer.
[131048020050] |If you want to output it also to stdout, you can use tee:
[131048030010] |I am looking for a solution which does not need to start the server in a special way, because I am not allowed to start the program and cannot even stop and restart it in a special environment.
[131048030020] |Someone starts the program on the server, and I have to check the message output in the terminal when there are troubles.
[131048030030] |Any idea how to achieve this?
[131048040010] |yum search something I get:
[131048040030] |How to fix?
[131048050010] |just try:
[131048050020] |and enter your root pw.
[131048060010] |$COMMAND_LINE, which is available on my Ubuntu system, but I'm not sure if it's standard.
[131048070030] |The "command invoked from ..." version has an additional environment variable set; the variable name is COMMAND_LINE and it contains (as its name indicates) the contents of the current (already typed in) command line.
[131048070040] |One can examine and use contents of the COMMAND_LINE variable in her custom script to build more sophisticated completions (see completion for svn(1) included in this package).
[131048070050] |Failing that, you could experiment with history expansions such as !! or !#$, but I'm not sure if that will work.
[131048120010] |stty sane, or more specifically stty echo, should turn echo back on. (stty sane will fix other terminal input or output oddities such as newlines not going back to the left margin.)
[131048120020] |Ssh (and most other programs) turn echo off for the password prompt, i.e., the characters you type are not displayed (echoed) to the screen. stty -echo is a shell command with the same effect.
[131048120030] |Normally echo should be turned back on (like stty echo) after the password prompt; this is a bug in either ssh or some other software at play here, such as your system libraries or terminal emulator.
[131048130010] |.sql, the command becomes:
[131048140040] |If you come from DOS/Windows, it may not be clear to you why this works.
[131048140050] |On Unixy systems, the shell expands wildcards, so the program (mysqlimport in this case) doesn't have to have its own processing.
[131048140060] |That's why the usage message you quote says it expects the files to be given individually: that's how it will see the files if you use commands like the above.
[131048140070] |The program only sees the wildcard if the pattern doesn't match anything; the shell passes it on literally to the program, having no better way of handling it.
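You can watch the expansion happen with echo (the database name and files below are made up for the demonstration):

```shell
cd "$(mktemp -d)"
touch a.sql b.sql
# The shell replaces *.sql before mysqlimport would ever run:
echo mysqlimport dbname *.sql   # -> mysqlimport dbname a.sql b.sql
```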
[131048150010] |package-cleanup is in the yum-utils package, which is available for installation via the Fedora repositories.
[131048170010] |example.so.1, example.so.2; example.so is in fact a symbolic link to example.so.2, where .2 is the latest version.
[131048170080] |How then does an application depending on an older version of example.so identify it correctly?
[131048170090] |Are there any rules as to what numbers one must use?
[131048170100] |Or is this simply convention?
[131048170110] |Is it the case that, unlike in Windows where software binaries are transferred between systems, if a system has a newer version of a shared object it is linked to the older version automatically when compiling from source?
[131048170120] |I suspect this is related to ldconfig, but I'm not sure how.
[131048180010] |Binaries themselves know which version of a shared library they depend on, and request it specifically.
[131048180020] |You can use ldd to show the dependencies; mine for ls are:
[131048180030] |As you can see, it points to e.g. libpthread.so.0, not just libpthread.so.
[131048180040] |The reason for the symbolic link is for the linker.
[131048180050] |When you want to link against libpthread.so directly, you give gcc the flag -lpthread, and it adds on the lib prefix and .so suffix automatically.
[131048180060] |You can't tell it to add on the .so.0 suffix, so the symbolic link points to the newest version of the lib to facilitate that.
[131048190010] |The numbers in the shared libraries are a convention used in Linux to identify the API of a library.
[131048190020] |Typically the format is:
[131048190030] |And as you noticed, usually there is a symbolic link from libFOO.so to libFOO.so.MAJOR.MINOR.
[131048190040] |The MAJOR is typically incremented when the API changes (entry points are removed, or the parameters or types change).
[131048190050] |The MINOR typically is incremented for bug fix releases or when new APIs are introduced without breaking existing APIs.
[131048190060] |The ldconfig command is responsible for creating the libFOO.so.MAJOR link to the latest version of libFOO.so.MAJOR.MINOR.
[131048190070] |A more extensive discussion can be found here:
[131048190080] |http://www.ibm.com/developerworks/web/library/l-shlibs.html
[131048200010] |libNAME.so is the filename used by the compiler/linker when first looking for a library specified by -lNAME.
[131048200020] |Inside a shared library file is a field called the SONAME.
[131048200030] |This field is set when the library itself is first linked into a shared object (so) by the build process.
[131048200040] |This SONAME is actually what the linker stores in an executable when that shared object is linked with it.
[131048200050] |Normally the SONAME is in the form of libNAME.so.MAJOR; it is changed any time the library becomes incompatible with existing executables linked to it, and both major versions of the library can be kept installed as needed (though only one will be pointed to for development as libNAME.so). Also, to support easily upgrading between minor versions of a library, libNAME.so.MAJOR is normally a link to a file like libNAME.so.MAJOR.MINOR.
[131048200060] |A new minor version can be installed and once completed, the link to the old minor version is bumped to point to the new minor version immediately upgrading all new executions to use the upgraded library.
[131048200070] |Also, see my answer to Linux, GNU GCC, ld, version scripts and the ELF binary format -- How does it work??
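The SONAME field itself can be inspected with readelf from binutils (a sketch; looking up libc via ldconfig is just one way to find an installed library):

```shell
# Find an installed libc and print its SONAME dynamic entry:
lib=$(ldconfig -p | awk '/libc\.so\.6/ { print $NF; exit }')
readelf -d "$lib" | grep SONAME
```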
[131048210010] |n
drop anything from program A to program B, it works.
[131048240020] |You can't change workspaces while dragging files, so you need to use the same workspace.
[131048240030] |I just tried it; it actually works ;)
[131048250010] |In X11, drag and drop is something that the application must support, it has nothing to do with the window manager.
[131048250020] |For example: you cannot drag'n'drop anything into an xcalc window, even with the Compiz window manager.
[131048250030] |The X11 drag and drop protocol is called XDND: see http://www.newplanetsoftware.com/xdnd/ for more information.
[131048260010] |syslog-ng.conf is primarily from the Gentoo Security Handbook, and thus simply using the .pacnew file won't work.
[131048260040] |Here's my current conf file:
[131048270010] |Hi, it's probably related to this change in 3.2:
[131048270020] |pdftotext from poppler has already been mentioned.
[131048300030] |There's a Haskell program called pdf2line which works well.
[131048300040] |calibre's ebook-convert command-line program (or calibre itself) is another option; it can convert PDF to plain text or to other ebook formats (RTF, ePub). In my opinion it generates better results than pdftotext, although it is considerably slower.
[131048300050] |ebook-convert file.pdf file.txt
[131048300060] |AbiWord can convert between any formats it knows from the command-line, and at least optionally has a PDF import plugin:
[131048300070] |abiword --to=txt file.pdf
[131048300080] |Yet another option is podofotextextract from the podofo PDF tools library.
[131048300090] |I haven't really tried that.
[131048300100] |If you combine the two Ghostscript tools, pdf2ps and ps2ascii, you have yet another option.
[131048300110] |I can actually think of a few more methods, but I'll leave it at that for now. ;)
[131048310010] |You can convert PDFs to text on the command line with pdftotext (Ubuntu: poppler-utils; OpenBSD: xpdf-utils package).
[131048310020] |You can use Recoll (Ubuntu: recoll; OpenBSD: no port, but there's one for FreeBSD) to search inside various formatted text document types, including PDF.
[131048310030] |There's a GUI, and it builds an index automatically under the hood.
[131048310040] |It uses pdftotext to convert PDF to text.
[131048310050] |Acrobat Reader (at least version 9 under Linux) has a limited multiple-file search capability (you can search in all the files in a directory).
[131048320010] |/etc/rc.local, another custom init script, or (if available) an upstart script.
[131048330050] |I recommend using the full path of the executable inside /etc/rc.local or in a custom init script.
[131048330060] |On my system this is /sbin/rfkill, but it can be found using the command which rfkill.
[131048330070] |Thus on my system, I would place the following command within /etc/rc.local somewhere before exit 0:
[131048330080] |Depending on your Debian setup, you may not have /etc/rc.local.
[131048330090] |In this case, a custom init script may be the way to go.
[131048330100] |The init script could be saved at /etc/init.d/disable-bluetooth and contain something like:
[131048330110] |Then ensure the script is executable (chmod 755) and add it to startup (update-rc.d disable-bluetooth defaults).
[131048330120] |An example of an upstart script would be a file named /etc/init/disable-bluetooth.conf containing something like:
[131048330130] |rfkill uses /dev/rfkill, which is an interface provided by the Linux kernel.
[131048340010] |xev running gives
[131048340060] |Could this problem be happening because of file corruption?
[131048340070] |What file would I check for corruption?
[131048340080] |I've done an fsck on the system drive —by running tune2fs -C 200 /dev/sda3 before rebooting— which seems to have come up clean.
[131048340090] |I.E.
[131048340100] |I'm running an updated (last dist-upgrade done yesterday) ubuntu 10.10.
[131048350010] |I've realized that this was happening because of a typo I made when manually editing my xfce keyboard shortcuts file.
[131048350020] |Specifically, the file ~/.config/xfce4/xfconf/xfce-perchannel-xml/xfce4-keyboard-shortcuts.xml used the modifier Meta5 (which doesn't exist) instead of Mod5 to modify the p key.
[131048350030] |I did note that no errors were recorded in ~/.xsession-errors
, despite the fact that xfce seems to register things there.
[131048350040] |It may be useful to some people to note that one of my reasons for editing the file was in order to make the same shortcuts work with or without the Keyboard Layouts applet being loaded.
[131048350050] |Depending on whether or not that applet is loaded, the "windows" key will register as either
or
.
[131048360010] |mplayer
takes a -softvol
flag that makes it use a software audio mixer instead of the sound card's hardware mixer.
[131048370020] |If you want it on permanently, you can add the following to ~/.mplayer/config
:
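The configuration snippet was lost; in mplayer's config-file syntax it would be something like this (softvol-max is an extra assumption that raises the software amplification ceiling):

```
softvol=yes
softvol-max=200
```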
[131048380010] |/proc/sys/vm/swappiness
to control the ratio of swapping vs keeping things in memory.
[131048410020] |A value of 0 completely avoids swapping at all costs.
[131048410030] |This can be done using either:
[131048410040] |echo 0 >/proc/sys/vm/swappiness
sysctl -w vm.swappiness=0
/etc/sysctl.conf
ssh $host "sudo su user -c '$CMD'"
?
[131048420070] |Is there a general recipe for managing quotes in such scenarios?
[131048430010] |How about using more double quotes?
[131048430020] |Then your ssh $host $CMD
should work just fine with this one:
[131048430030] |CMD="pgrep -fl java | grep -i datanode | awk '{print \$1}'"
[131048430040] |Now to the more complex one, the ssh $host "sudo su user -c \"$CMD\""
.
[131048430050] |I guess all you have to do is escape sensitive characters in CMD
: $
, \
and "
.
[131048430060] |So I'd try and see if this works: echo "$CMD" | sed -e 's/[$\\"]/\\&/g'
.
[131048430070] |If that looks OK, wrap the echo+sed into a shell function, and you are good to go with ssh $host "sudo su user -c \"$(escape_my_var $CMD)\""
.
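A sketch of that helper (the function name escape_my_var comes from the text; the sed replacement uses & for the matched character):

```shell
# Hypothetical helper: backslash-escape $, \ and " so a string
# survives one extra level of double-quote parsing.
escape_my_var() {
  printf '%s\n' "$1" | sed -e 's/[$\\"]/\\&/g'
}

escape_my_var 'awk sees: $1 and "x"'
# prints: awk sees: \$1 and \"x\"
```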
[131048440010] |Dealing with multiple levels of quoting (really, multiple levels of parsing/interpretation) can get complicated.
[131048440020] |It helps to keep a few things in mind:
[131048440030] |sh -c …
) and that shell interprets the string.
[131048440130] |Then you asked about adding another shell level in the middle by using su (via sudo, which does not interpret its command arguments, so we can ignore it).
[131048440140] |At this point, you have three levels of nesting going on (awk → shell, shell → shell (ssh), shell → shell (su user -c)), so I advise using the “bottom, up” approach.
[131048440150] |I will assume that your shells are Bourne compatible (e.g. sh, ash, dash, ksh, bash, zsh, etc.).
[131048440160] |Some other kind of shell (fish, rc, etc.) might require different syntax, but the method still applies.
[131048440170] |\\
and \'
result in \
and '
, but other backslash sequences are actually literal).
[131048440310] |You will have to read the documentation for each of your languages to understand its quoting rules and the overall syntax.
[131048440320] |$
in the awk program.
[131048440360] |The obvious choice is to use single quote in the shell around the whole program.
'{print $1}'
Some alternatives:
{print\ \$1}  (directly escape the space and $)
{print' $'1}  (single quote only the space and $)
"{print \$1}"  (double quote the whole and escape the $)
{print" $"1}  (double quote only the space and $; this may be bending the rules a bit, since an unescaped $ at the end of a double quoted string is literal, but it seems to work in most shells)
So we take our chosen '{print $1}' and embed it in the rest of the shell “code”:
[131048440450] |Next, you wanted to run this via su and sudo.
[131048440460] |su user -c …
is just like some-shell -c …
(except running under some other UID), so su just adds another shell level. sudo does not interpret its arguments, so it does not add any quoting levels.
[131048440470] |We need another shell level for our command string.
[131048440480] |We can pick single quoting again, but we have to give special handling to the existing single quotes.
[131048440490] |The usual way looks like this:
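The example itself was lost in extraction; judging from the description of the four concatenated strings just below, its shape is (a sketch):

```shell
# Four pieces the shell concatenates:
#   'pgrep -fl java | grep -i datanode | awk '   single-quoted
#   \'                                           escaped quote
#   '{print $1}'                                 single-quoted
#   \'                                           escaped quote
inner='pgrep -fl java | grep -i datanode | awk '\''{print $1}'\'
printf '%s\n' "$inner"
# prints: pgrep -fl java | grep -i datanode | awk '{print $1}'
```

That string is what would follow su user -c in the full command.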
[131048440500] |There are four strings here that the shell will interpret and concatenate: the first single quoted string (pgrep … awk
), an escaped single quote, the single-quoted awk program, another escaped single quote.
[131048440510] |There are, of course, many alternatives:
pgrep\ -fl\ java\ \|\ grep\ -i\ datanode\ \|\ awk\ \'{print\ \$1}  (escape everything important)
pgrep\ -fl\ java\|grep\ -i\ datanode\|awk\ \'{print\$1}  (the same, but without superfluous whitespace (even in the awk program!))
"pgrep -fl java | grep -i datanode | awk '{print \$1}'"  (double quote the whole thing, escape the $)
'pgrep -fl java | grep -i datanode | awk '"'"'{print \$1}'"'"  (your variation; a bit longer than the usual way due to using double quotes (two characters) instead of escapes (one character))
'pgrep -fl java | grep -i datanode | awk "{print \$1}"'
'pgrep -fl java | grep -i datanode | awk {print\ \$1}'
ssh host …
).
[131048440610] |Next, you added a level of ssh on top.
[131048440620] |This is effectively another shell level: ssh does not interpret the command itself, but it hands it to a shell on the remote end (via (e.g.) sh -c …
) and that shell interprets the string.
[131048440630] |The process is the same: take the string, pick a quoting method, use it, embed it.
[131048440640] |Using single quotes again:
[131048440650] |Now there are eleven strings that are interpreted and concatenated: 'sudo su user -c '
, escaped single quote, 'pgrep … awk '
, escaped single quote, escaped backslash, two escaped single quotes, the single quoted awk program, an escaped single quote, an escaped backslash, and a final escaped single quote.
[131048440660] |The final form looks like this:
[131048440670] |This is a bit unwieldy to type by hand, but the literal nature of the shell’s single quoting makes it easy to automate a slight variation:
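One way to sketch that automation (the helper name shquote is made up): rewrite each embedded ' as '\'' and wrap the result in single quotes, applying the helper once per added shell level.

```shell
# Hypothetical helper: quote a string so it survives exactly one more
# level of shell parsing; each ' becomes the four characters '\''.
shquote() {
  printf "'%s'\n" "$(printf '%s' "$1" | sed "s/'/'\\\\''/g")"
}

shquote "it's"
# prints: 'it'\''s'
```

A command could then be built mechanically, e.g. ssh host "sudo su user -c $(shquote "$CMD")".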
[131048450010] |See Chris Johnsen's answer for a clear, in-depth explanation with a general solution.
[131048450020] |I'm going to give a few extra tips that help in some common circumstances.
[131048450030] |Single quotes escape everything but a single quote.
[131048450040] |So if you know the value of a variable doesn't include any single quote, you can interpolate it safely between single quotes in a shell script.
[131048450050] |If your local shell is ksh93 or zsh, you can cope with single quotes in the variable by rewriting them to '\''
.
[131048450060] |(Although bash also has the ${foo//pattern/replacement}
construct, its handling of single quotes doesn't make sense to me.)
[131048450070] |Another tip to avoid having to deal with nested quoting is to pass strings through environment variables as much as possible.
[131048450080] |Ssh and sudo tend to drop most environment variables, but they're often configured to let LC_*
through, because these are normally very important for usability (they contain locale information) and are rarely considered security sensitive.
[131048450090] |Here, since LC_CMD
contains a shell snippet, it must be provided literally to the innermost shell.
[131048450100] |Therefore the variable is expanded by the shell immediately above.
[131048450110] |The innermost-but-one shell sees "$LC_CMD"
, and the innermost shell sees the commands.
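The illustrating command was stripped; here is a local sketch of the same idea, with sh -c standing in for the ssh and su hops (in real use the server's sshd must be configured to accept LC_* via AcceptEnv; the variable name LC_CMD is the convention suggested in the text):

```shell
# LC_CMD travels as *data* through the outer shell; only the inner
# double-quoted "$LC_CMD" expands it, and the innermost shell then
# parses it as a command.
LC_CMD='echo one two | awk "{print \$1}"' \
  sh -c 'sh -c "$LC_CMD"'
# prints: one
```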
[131048450120] |A similar method is useful to pass data to a text processing utility.
[131048450130] |If you use shell interpolation, the utility will treat the value of the variable as a command, e.g. sed "s/$pattern/$replacement/"
won't work if the variables contain /
.
[131048450140] |So use awk (not sed), and either its -v
option or the ENVIRON
array to pass data from the shell (if you go through ENVIRON
, remember to export the variables).
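A sketch of both mechanisms with made-up data containing a /, which would derail the sed version:

```shell
# Pass data via -v: pat and rep arrive as awk strings, not sed syntax.
# (-v values do undergo backslash-escape processing, so \ needs care.)
printf '%s\n' 'x a/b y' | awk -v pat='a/b' -v rep='c-d' '{ sub(pat, rep); print }'
# prints: x c-d y

# Or via the environment (the variable must be exported):
pattern='a/b'; export pattern
printf '%s\n' 'x a/b y' | awk '{ sub(ENVIRON["pattern"], "c-d"); print }'
# prints: x c-d y
```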
[131048460010] |awk, perl, sed, and others.
[131048510020] |Here is a rather simplistic option that uses tr
to turn this problem back into a problem we know how to solve: finding a pattern within a line:
[131048510030] |The tr 'C' '\n'
command translates any "C" in the input into a newline character.
[131048510040] |Thus, we then just need to pipe it into a command that will output the text between A and B, and between B and the end of the line.
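The command itself was stripped; a sketch with literal characters A, B and C and made-up input:

```shell
# Split records on C, then keep the text between A and B on each line.
printf 'xxAfooBxxCyyAbarBzz\n' |
  tr 'C' '\n' |
  sed -n 's/.*A\(.*\)B.*/\1/p'
# prints:
# foo
# bar
```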
[131048510050] |If A, B, and C are regular expressions rather than simple characters, try:
[131048510060] |This uses the same basic idea, but uses sed
to create the newlines.
[131048520010] |Awk generalizes the notion of lines to records, which can be terminated by any character.
[131048520020] |Several implementations, such as Gawk, support an arbitrary regular expression as the record separator.
[131048520030] |Untested:
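A sketch of the idea; plain awk already accepts a single-character RS, while Gawk would also take a regular expression there:

```shell
# Each C-terminated chunk is one record; print what lies between
# A and B inside the record.
printf 'xxAfooBxxCyyAbarBzz\n' |
  awk 'BEGIN { RS = "C" }
       match($0, /A.*B/) { print substr($0, RSTART + 1, RLENGTH - 2) }'
# prints:
# foo
# bar
```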
[131048530010] |.css
file.
[131048540050] |If you find it, you can then find the elements you are looking for and just tweak the CSS.
[131048540060] |I'll post back the required path to the file and the rules to be tweaked.
[131048550010] |Blender's answer pointed me in the right direction.
[131048550020] |I didn't actually modify those files, but what I did instead was create a file ~/.mozilla-thunderbird/iddbnhwr.default/chrome/userChrome.css
and put my changes in there.
[131048550030] |I made mine look like this:
[131048550040] |Analyzing the files from Blender's answer showed me that the following are the CSS selectors I wanted:
[131048550050] |#folderTree - The list of folders on the left hand side
#threadTree - The list of messages on the top right.
#msgHeaderView - The header pane at the top of every message preview / viewer window
#mailContent - Looks like the body of mail messages?
#folderUnreadCol, #folderTotalCol, #folderSizeCol, #folderNameCol - Self explanatory
treecol.flagColumnHeader - Looks like you could change the flag icon to something else...
[131048550120] |Maybe an upvote icon? ;-)
treecol.junkStatusHeader - Same for junk icon.
[131048550140] |Just change the list-style-image: url(...)
rule.
wget
and then save the output in a file called temp.html
.
[131048560040] |I tried this, but it doesn't work.
[131048560050] |Can someone explain why and/or give me a solution please?
[131048570010] |You're not actually executing your URL line:
[131048580010] |wget also accepts stdin with the -
switch.
[131048580020] |If you want to save the output in a file, use the -O
switch.
[131048590010] |You can use backticks (`) to evaluate a command and substitute in the command's output, like:
[131048590020] |In your case:
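Both examples were stripped; a generic sketch (the url.txt filename is made up):

```shell
# The backticked command runs first; its output replaces the backticks.
name=`echo report`
echo "fetching $name"
# prints: fetching report

# So for the wget case, something along the lines of:
#   wget `cat url.txt` -O temp.html
```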
[131048600010] |You could use "xargs".
[131048600020] |A trivial example:
[131048600030] |You would have to take care that xargs doesn't split its stdin into two or more invocations of the command ("cat" in the example above).
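The stripped example, reconstructed as a sketch with echo standing in for the command:

```shell
# xargs turns stdin words into command arguments.
printf '%s\n' one two three | xargs echo
# prints: one two three

# With -n you can see the splitting behavior the caveat refers to:
printf '%s\n' one two three | xargs -n 2 echo
# prints:
# one two
# three
```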
[131048610010] |It seems you could use a combination of the answers here.
[131048610020] |I'm guessing you want to replace space characters with their escaped ASCII values in the URL.
[131048610030] |To do this, you need to replace them with "%20", not just "%".
[131048610040] |Here's a solution that should give you a complete answer:
[131048610050] |The backticks indicate that the enclosed command should be interpreted first, and the result sent to wget.
[131048610060] |Notice I escaped the space and % chars in the sed command to prevent them from being misinterpreted.
[131048610070] |The -q option for wget prevents processing output from the command being printed to the screen (handy for scripting when you don't care about the in-work status) and the -O option specifies the output file.
[131048610080] |FYI, if you don't want to save the output to a file, but just view it in the terminal, use "-" instead of a filename to indicate stdout.
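The complete command was lost; a sketch with a made-up URL (the wget line is left as a comment since it needs a live server):

```shell
url='http://example.com/some page name'
encoded=`echo "$url" | sed 's/ /%20/g'`
echo "$encoded"
# prints: http://example.com/some%20page%20name

# then: wget -q -O temp.html "$encoded"
# or:   wget -q -O - "$encoded"        # write the page to stdout
```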
[131048620010] |pts/3
, and your friend's is ?
, which means it's detached from the terminal.
[131048630030] |You could see where the output is going with ls -l /proc/7494/fd/
(where 7494 is the process ID of your friend's process) — although if you're not running as root, you probably can't even look, for security reasons.
[131048630040] |(So try sudo ls -l /proc/7494/fd/
.)
[131048630050] |There are horrible, horrible, kludgy things you might be able to do to change where the output of the program goes.
[131048630060] |But in general, you can't and shouldn't.
[131048630070] |If your friend wants to share the output with you, an approach would be to redirect the output of the program to a file, and then make that file readable by you:
[131048630080] |(Where in this case "readable by you" is "readable by everyone"; with a little more work you can set up a shared group so just the two of you can exchange output.)
[131048630090] |(And be aware that python buffers output by default — turning that off is what the -u
is for.)
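A runnable sketch of that approach; some_long_job is a placeholder for the real program (for Python, python -u script.py as noted):

```shell
some_long_job() { echo "step 1"; echo "step 2"; }   # placeholder

some_long_job > /tmp/shared-output.log 2>&1   # both streams into the file
chmod 644 /tmp/shared-output.log              # readable by everyone
tail -n 1 /tmp/shared-output.log              # the friend can tail -f this
# prints: step 2
```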
[131048640010] |If you have root access on the machine and your friend is willing to execute some commands, it is possible:
screen has to be setuid root: chmod u+s /usr/bin/screen
Your friend starts screen; he can give the session a name, which makes it easier: screen -S "shared_session"
He turns on multiuser mode: Ctrl-a :multiuser on
He grants you access: Ctrl-a :acladd you
Optionally, he makes your access read-only: Ctrl-a :aclchg you -w "#"
You attach to his session: screen -x friend/shared_session
screen
is a very comprehensive tool, and can do a lot more than what I've described.
[131048660070] |While in a screen session, try ctrl+a,? to learn a few common commands.
[131048660080] |Probably the most common are:
[131048660090] |screen -d -r
to ensure that if another shell is attached to my screen session, it will be detached before I resume it on my current system.
top
.
[131048760020] |Here is some output:
[131048760030] |It's not very script friendly.
[131048760040] |Here's ps aux
:
[131048760050] |Try playing with those.
[131048760060] |I'm not sure what blocked processes are, but these commands should help.
[131048760070] |Good luck!
[131048770010] |Building on Blender's answer, to get the number of running processes the following can be used:
[131048770020] |To get the number of processes in Uninterruptible Sleep you can use (Edit: changed 'D' to 'U', thanks Gilles!):
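A sketch of such counts with ps (ps -eo stat= prints one state field per process; on Linux, R is running and D is uninterruptible sleep):

```shell
total=$(ps -eo stat= | wc -l)
running=$(ps -eo stat= | grep -c '^R' || true)
dstate=$(ps -eo stat= | grep -c '^D' || true)
echo "$total total, $running running, $dstate uninterruptible"
```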
[131048780010] |°
) is Option+Shift+8.
[131048780040] |But I'm writing the email in Thunderbird on an Ubuntu 10.10 with the default US English keyboard layout.
[131048780050] |What key combination do I use to get the degree symbol under X11?
[131048780060] |EDIT: Gert successfully answered the question... but, bonus points for any easier to use keystroke than what's in his answer!
[131048790010] |Ctrl + Shift + u (this will show an underlined u) and then the unicode value (in this case B0
) and follow it by an enter.
[131048800010] |You can also use + + 0
[131048810010] |Set up a Compose key.
[131048810020] |On Ubuntu, this is easily done in the keyboard preferences, “Layout” tab, “Options” subdialog. Caps Lock is a good choice as it's pretty much useless (all remotely serious editors have a command to make the selection uppercase for the rare times it's needed).
[131048810030] |Press Compose followed by two characters (occasionally three) to enter a character you don't have on your keyboard.
[131048810040] |Usually the resulting character combines the two characters you type, for example Compose ' a enters á
and Compose s s enters ß
.
[131048810050] |The degree symbol °
is one of the less memorable combinations, it's on Compose o o.
[131048820010] |darcs show repo
and use $?
to get its return code.
[131048820060] |My question is: is there a neat way to run it and print the return code in one line? For example:
[131048820070] |Or do I have to define a function?
[131048820080] |An added requirement is that both stderr and stdout should be printed.
[131048830010] |Well, it's not very pretty, but it's one way to do it inline:
[131048830020] |By definition, if tests the exit code of a command, so you don't need to do an explicit comparison, unless you want more than success or failure.
[131048830030] |There's probably a more elegant way to do this.
[131048840010] |The if statement automatically checks the return code:
[131048840020] |You could also run the command and use && (logical AND) or || (logical OR) afterwards to check if it succeeded or not:
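A sketch of both forms, with true and false standing in for darcs show repo:

```shell
# `if` consumes the exit status directly:
if true; then echo "command succeeded"; fi
# prints: command succeeded

# && / || as the one-line variant:
false && echo "in a repo" || echo "not a repo"
# prints: not a repo

# and $? still holds the last status when you need the number itself:
false; echo "exit status: $?"
# prints: exit status: 1
```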
[131048840030] |Redirecting stdout
and stderr
can be done once with exec
[131048840040] |The first two exec
are saving the stdout
and stderr
file descriptors, the third redirects both to /dev/null
(or somewhere other if wished).
[131048840050] |The last two exec
restore the file descriptors again.
[131048840060] |Everything in between gets redirected to nowhere.
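The exec sequence itself was stripped; a sketch using descriptors 3 and 4 as the save slots (any free descriptor numbers would do):

```shell
exec 3>&1 4>&2         # save stdout and stderr
exec >/dev/null 2>&1   # send both to /dev/null
echo "this goes nowhere"
exec 1>&3 2>&4         # restore the originals
exec 3>&- 4>&-         # close the temporary descriptors
echo "visible again"
# prints: visible again
```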
[131048840070] |Append other repo checks like Gilles suggested.
[131048850010] |As others have already mentioned, if command
tests whether command
succeeds.
[131048850020] |In fact [ … ]
is an ordinary command, which can be used outside of an if
or while
conditional although it's uncommon.
[131048850030] |However, for this application, I would test the existence of the characteristic directories.
[131048850040] |This will be correct in more edge cases.
[131048850050] |Bash/ksh/zsh/dash version (untested):
[131048850060] |In POSIX sh, there is no -ef
(same file) construct, so a different test is needed to break out of the recursion when the root directory is reached.
[131048850070] |Replace while ! [ "$d" -ef / ];
by while [ "$(cd -- "$d"; command pwd)" != / ];
.
[131048850080] |(Use command pwd
and not pwd
because some shells track symbolic links in pwd
and we don't want that here.)
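Assembling the pieces into a sketch (untested against darcs itself, like the original; _darcs is assumed to be the characteristic directory):

```shell
# Walk up toward / looking for a _darcs directory.
# Runs in a subshell so the loop variable does not leak.
in_darcs_repo() (
  d=.
  while ! [ "$d" -ef / ]; do
    [ -d "$d/_darcs" ] && return 0
    d="$d/.."
  done
  [ -d "$d/_darcs" ]
)

# usage: in_darcs_repo && echo "inside a darcs repo"
```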
[131048860010] |ifconfig wlan0
, iwconfig
, network interface cards, and wpa_supplicant
configs:
[131048860040] |I understand that the wpa doesn't work with third-party drivers and ndiswrapper; I've been told to use wpa_supplicant instead.
[131048860050] |How can I get my windows driver to work with the Netgear WG311 wireless card?
[131048860060] |UPDATE
[131048860070] |OK, I had a look at the resource that Macieg gave me.
[131048860080] |Finally got a connection but after restart it is gone.
[131048860090] |The output of the command wpa_supplicant -i wlan0 -D wext
[131048860100] |just shows the wpa_supplicant help text.
[131048860110] |UPDATE2
[131048860120] |OK, the connection comes up after restart, but only 5 minutes after I run this command:
[131048860130] |Anyone know how to fix this?
[131048870010] |wpa_supplicant does have support for ndiswrapper and should be run like this:
[131048870020] |Instructions on how to install ndiswrapper drivers are available; however, I have never used ndiswrapper myself.
[131048870030] |If that does not help, please post the output of:
[131048870040] |As a side note - to use WPA you need a user-space program called a WPA supplicant.
[131048870050] |The most popular one is wpa_supplicant.
[131048870060] |Therefore the WPA stack on Linux consists of 2 elements:
[131048870070] |