[131094180010] |
Understanding top and load average
[131094180020] |I'm observing a high load average on a certain machine (about 9) in all three load fields.
[131094180030] |I understand load as the number of processes in state "run" / currently desiring CPU time.
[131094180040] |Am I correct in reasoning that if N processes are running on my machine, this cannot produce a load greater than N?
[131094180050] |Also, is the load counted with respect to processes or threads?
[131094180060] |In other words, can a multithreaded process produce a load greater than 1?
[131094190010] |The numbers used to calculate the load average are the tasks in the run or uninterruptible state, plus the amount of work done in the time slice of the moving average.
[131094190020] |These tasks can be part of a multithreaded process.
[131094190030] |The fields are fuzzy the farther back in time due to smoothing results from the algorithm used.
[131094190040] |A load of 1 is equal to 100% of one CPU's worth of work.
[131094190050] |If you had a multithreaded application that managed to have a number of active threads in excess of the number of available CPUs, then you could have a single process drive the load above 1.
[131094190060] |This would likely be a short term spike and not reflected in the longer time slice views of the load average.
[131094190070] |Also, since the load average was developed before there were multi-core systems, it's important to divide the load numbers by the total number of available cores.
[131094190080] |If this is a sustained load of 9 on a 4 socket quad core system, then this is a 9 of 16 load and not really a problem.
[131094200010] |Load average is usually described as "average length of run queue".
[131094200020] |So few CPU-consuming processes or threads can raise LA above 1.
[131094200030] |There is no problem if LA is less than total number of CPU cores.
[131094200040] |But if it gets higher than number of CPUs, this means some threads/processes will stay in queue, ready to run, but waiting for free CPU.
[131094210010] |How to get the char at a given position of a string in shell script?
[131094210020] |How to get the char at a given position of a string in shell script?
[131094220010] |In bash with "Parameter Expansion" ${parameter:offset:length}
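A minimal sketch of that expansion (str and pos are illustrative names; the offset is zero-based):
    str=hello
    pos=2
    printf '%s\n' "${str:pos:1}"   # prints "l", the character at offset 2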
[131094220020] |Edit: Without parameter expansion (not very elegant, but that's what came to me first)
[131094230010] |Alternative to parameter expansion is expr substr
[131094230020] |For example:
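(The original snippet is missing; presumably something along these lines. Note that expr substr counts positions from 1 and is a GNU extension:)
    str=hello
    expr substr "$str" 3 1    # prints "l", the third character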
[131094240010] |How to suspend and resume processes like bash does
[131094240020] |this question is a follow-up to: How to suspend and resume processes
[131094240030] |I have started firefox from a bash session in gnome-terminal.
[131094240040] |The process tree looks like this:
[131094240050] |When I hit CTRL+Z in bash, it will suspend firefox.
[131094240060] |When I issue the command bg (or fg) it will resume firefox.
[131094240070] |This is as expected.
[131094240080] |When I issue the command kill -s SIGTSTP 27980 in another terminal, it will print the line [1]+ Stopped firefox in the first terminal (just like when I hit CTRL+Z), but it does not suspend firefox.
[131094240090] |I assume it only suspends the shell script.
[131094240100] |When I issue the command kill -s SIGTSTP 27989 (note the PID) in another terminal, it will suspend firefox.
[131094240110] |The first terminal does not take note of this.
[131094240120] |How does bash suspend the entire process tree? Does it just traverse the tree and SIGTSTP all of the children?
[131094250010] |Shell jobs live in "process groups"; look at the PGRP column in extended ps output.
[131094250020] |These are used both for job control and to determine who "owns" a terminal (real or pty).
[131094250030] |POSIX (taken from System V) uses a negative process ID to indicate a process group, since the process group is identified by the first process in the group ("process group leader").
[131094250040] |So you would use ps to determine the process group, then kill -s TSTP -$pgrp.
[131094250050] |(Try ps -u$USER -opid,ppid,pgrp,cmd.)
[131094250060] |In your process tree, the process group starts with the firefox script launched by bash, so the process group would be 27980 and the command would be kill -s TSTP -27980.
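Putting it together, a sketch using the PIDs from the question:
    ps -u"$USER" -opid,ppid,pgrp,cmd | grep firefox   # find the process group, here 27980
    kill -s TSTP -27980                               # suspend the whole group
    kill -s CONT -27980                               # resume it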
[131094260010] |Forcing GNU make to run commands in order
[131094260020] |With the following Makefile, GNU make runs the two commands in parallel.
[131094260030] |Since the first one takes time to finish, rm *.log is run before the log file is created, and fails.
[131094260040] |The file dummy.tex contains one line: \bye (a short, essentially empty file for TeX).
[131094260050] |Replacing tex dummy.tex by any other command shows the same behaviour.
[131094260060] |Removing &>/dev/null would of course solve the problem, but it is not a very good option in my case, since the Makefile is provided by a third party.
[131094260070] |Is it possible to prevent GNU make from doing anything in parallel? (the flag -j 1 does not help).
[131094260080] |EDIT: output to the terminal:
[131094270010] |Er? make parallelizes targets (with -j); it never reorders commands within a target.
[131094280010] |Actually, you don't have a problem with make, but with your command:
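The offending recipe line is presumably something like this (reconstructed from the question):
    tex dummy.tex &>/dev/null    # /bin/sh parses this as: tex dummy.tex &   >/dev/null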
[131094280020] |Runs 'tex' in the background.
[131094280030] |You don't need to remove '>/dev/null', but '&' is sending 'tex' to the background.
[131094280040] |Try this, it must be fine for you:
[131094280050] |or run everything in the same subshell, like this:
[131094280060] |or less sane, this:
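(The original snippets are missing here; the first two alternatives were presumably along these lines, using the portable redirection:)
    tex dummy.tex >/dev/null 2>&1
    rm *.log

    ( tex dummy.tex >/dev/null 2>&1 && rm *.log )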
[131094280070] |PD: &> is an extension provided by some shells (including bash) to redirect both stdout and stderr to the same destination, but it's not portable; you should use '>/dev/null 2>&1' instead.
[131094280080] |(Thanks @Gilles)
[131094280090] |Cheers
[131094290010] |Unable to boot Chromium OS from USB drive
[131094290020] |Hello, I have tried both my own builds of Chromium OS and Hexxeh's daily Vanilla builds, and neither work.
[131094290030] |I copy them over to my flash drive, but when I try to boot from it, nothing happens except my backlight flashing on and off a few times, then a reboot.
[131094290040] |System: Compaq Presario CQ50-140US http://h10025.www1.hp.com/ewfrf/wc/document?docname=c01550071&tmp_task=prodinfoCategory&lc=en&dlc=en&cc=us&site=null&lang=en&product=3795225&key=null
[131094300010] |How does the Linux file system/organization differ from Windows?
[131094300020] |One of the things that really slows me down in catching on with Linux is the huge difference between the file system in Linux versus Windows.
[131094300030] |Up until the last 3-4 years I only used Windows systems, and it's only been the last 2-3 months that I've really worked at getting accustomed to Linux.
[131094300040] |I think one of the things that really bothers me at this point is that I felt like I could find just about anything I wanted to when I began digging through the various folders in Windows.
[131094300050] |I had become accustomed to what kinds of things were placed where and when.
[131094300060] |I don't have that with Linux.
[131094300070] |I'm learning a few things like the /opt folder is where most third-party programs get installed... but my understanding is limited.
[131094300080] |What things are important to know to really understand the file system and to be able to locate different files and programs and such?
[131094310010] |Have a look at the Filesystem Hierarchy Standard (FHS), which is a standard of organising directory structure.
[131094310020] |I assume most (all?) Linux systems more or less follow it.
[131094320010] |In some *nix distributions (tested on OpenBSD and Ubuntu) the man page for the file system hierarchy can be useful.
[131094320020] |Of course, this will vary depending on platform and how up to date the manual pages are.
[131094320030] |The man page (on Ubuntu) also references the Filesystem Hierarchy Standard that was pointed to earlier.
[131094320040] |I did not find a similar manual entry on solaris.
[131094320050] |In general, different linux distributions are free to put things wherever they want to, so it can get confusing quickly.
[131094320060] |Most executables that the system installs (either as part of the base system or through a package manager) will end up somewhere in your default path (usually in /usr/bin /usr/sbin /usr/local/bin /usr/local/sbin) though some distributions use /opt, /usr/share/bin, among others.
[131094320070] |A parallel could be drawn to the "Program Files" folder in windows.
[131094320080] |The "Windows" folder could be somewhat like /usr/lib /usr/include /usr/share and others.
[131094320090] |Your "My Documents" folder is somewhat parallel to your home directory on linux (/home/user) with the nice addition that most user specific configurations for programs like firefox, thunderbird, etc live here making moving to a new system fairly painless.
[131094320100] |The problem with these parallels is they are not exactly accurate.
[131094320110] |Windows and linux are different in this respect.
[131094320120] |The best way forward is to continue poking around with the system, as you probably did when you first started using windows, break things, fix them, ask questions along the way.
[131094330010] |In Windows when you install a program, it creates a folder in Program Files that contains everything.
[131094330020] |It has the binaries, the libraries, the resource files, etc.
[131094330030] |In Linux, binaries usually go in /usr/bin, libraries go in /usr/lib, and resources go in /usr/share.
[131094330040] |Specifically, something like an icon file would go in /usr/share/icons.
[131094330050] |Fonts can be found in /usr/share/fonts.
[131094330060] |Development headers go in /usr/include, and lastly, configuration files go in /etc.
[131094330070] |This layout has some cool advantages.
[131094330080] |By putting libraries into /usr/lib, it's much easier to share libraries between applications.
[131094330090] |In Windows, it's typical to see each program install its own set of libraries into Program Files, and another app may install the exact same library, so now you have two exact copies of a .dll file.
[131094330100] |It also adds some consistency to the filesystem layout.
[131094330110] |In Windows, you know the program will have a folder in Program Files, but there's no guarantee of how it's laid out inside that folder.
[131094330120] |Maybe the executables will be in a bin/ folder, maybe not.
[131094330130] |Maybe it will have a conf/ folder, again maybe not.
[131094340010] |Looping through files with spaces in the names?
[131094340020] |I wrote the following script to diff the outputs of two directories with all the same files in them, as such:
[131094340030] |I know there are other ways to achieve this.
[131094340040] |Curiously though, this script fails when the files have spaces in them.
[131094340050] |How can I deal with this?
[131094340060] |Example output of find:
[131094350010] |Double quoting is your friend.
[131094350020] |Otherwise the variable's contents get word-split.
[131094360010] |File names with spaces in them look like multiple names on the command line if they're not quoted.
[131094360020] |If your file is named "Hello World.txt", the diff line expands to:
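(presumably something like this; the original example is missing:)
    diff Hello World.txt /some/other/path/Hello World.txt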
[131094360030] |which looks like four file names.
[131094360040] |Just put quotes around the arguments:
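A sketch of the quoted version, using the same loop variable as in your script:
    diff "$file" "/some/other/path/$file"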
[131094370010] |Short answer
[131094370020] |Long answer
[131094370030] |You have two problems:
[131094370040] |By default, the shell splits the output of a command on spaces, tabs, and newlines
[131094370050] |Filenames could contain wildcard characters which would get expanded
[131094370060] |1. Splitting only on newlines
[131094370070] |To figure out what to set file to, the shell has to take the output of find and interpret it somehow, otherwise file would just be the entire output of find.
[131094370080] |The shell reads the IFS variable, which is set to space, tab, and newline by default.
[131094370090] |Then it looks at each character in the output of find.
[131094370100] |As soon as it sees any character that's in IFS, it thinks that marks the end of the file name, so it sets file to whatever characters it saw until now and runs the loop.
[131094370110] |Then it starts where it left off to get the next file name, and runs the next loop, etc., until it reaches the end of output.
[131094370120] |So it's effectively splitting the output at every space, tab, and newline, and running the loop body once for each resulting word.
[131094370130] |To tell it to only split the input on newlines, you need to do the following before your for ... find command.
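A sketch of what that looks like in bash or zsh:
    IFS=$'\n'    # split the output of find on newlines only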
[131094370150] |That sets IFS to a single newline, so it only splits on newlines, and not spaces and tabs as well.
[131094370160] |If you are using sh or dash instead of bash or zsh, you need to write IFS=$'\n' like this instead:
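For example (POSIX sh has no $'\n' syntax, so the newline is embedded literally between the quotes):
    IFS='
'   # the closing quote is at the start of the next line, so IFS contains only a newline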
[131094370170] |2. Expanding $file
without wildcards
[131094370180] |In the bit where you run diff on $file,
[131094370190] |the shell tries to expand $file (again!).
[131094370200] |It could contain spaces, but since we already set IFS above, that won't be a problem here.
[131094370210] |But it could also contain wildcard characters such as * or ?, which would lead to unpredictable behavior.
[131094370220] |(Thanks to Gilles for pointing this out.)
[131094370230] |To tell the shell not to expand wildcard characters, put the variable inside double quotes, e.g. "$file".
[131094370240] |Finally, just for completeness, here's a version that also works if file names also contain newlines:
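A sketch of such a version, letting find run the diff itself so that no shell word splitting happens at all (the /some/other/path prefix is taken from your script):
    find . -type f -exec sh -c '
        for f do
            diff -- "$f" "/some/other/path/$f"
        done' sh {} +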
[131094370250] |NOTES
[131094370260] |I removed the semi-colons (;
) inside the loop.
[131094370270] |You can put them back if you want, but they are not needed.
[131094380010] |This script fails if any file name contains spaces or shell globbing characters \[?*
.
[131094380020] |The find
command outputs one file name per line.
[131094380030] |Then the command substitution `find …`
is evaluated by the shell as follows:
[131094380040] |Execute the find
command, grab its output.
[131094380050] |Split the find
output into separate words.
[131094380060] |Any whitespace character is a word separator.
[131094380070] |For each word, if it is a globbing pattern, expand it to the list of files it matches.
[131094380080] |For example, suppose there are three files in the current directory, called foo* bar.csv, foo 1.txt and foo 2.txt.
[131094380090] |The find command returns ./foo* bar.csv.
[131094380100] |The shell splits this string at the space, producing two words: ./foo* and bar.csv.
[131094380110] |Since ./foo* contains a globbing metacharacter, it's expanded to the list of matching files: ./foo 1.txt and ./foo 2.txt.
[131094380120] |Therefore the for loop is executed successively with ./foo 1.txt, ./foo 2.txt and bar.csv.
[131094380130] |You can avoid most problems at this stage by toning down word splitting and turning off globbing.
[131094380140] |To tone down word splitting, set the IFS variable to a single newline character; this way the output of find will only be split at newlines and spaces will remain.
[131094380150] |To turn off globbing, run set -f.
[131094380160] |Then this part of the code will work as long as no file name contains a newline character.
[131094380170] |(This isn't part of your problem, but I recommend using $(…)
over `…`
.
[131094380180] |They have the same meaning, but the backquote version has weird quoting rules.)
[131094380190] |There's another problem below: diff $file /some/other/path/$file should have the variable double-quoted.
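(presumably like this:)
    diff "$file" "/some/other/path/$file"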
[131094380200] |Otherwise, the value of $file is split into words and the words are treated as glob patterns, like with the command substitution above.
[131094380210] |If you must remember one thing about shell programming, remember this: always use double quotes around variable expansions ($foo
) and command substitutions ($(bar)
), unless you know you want to split.
[131094380220] |(Above, we knew we wanted to split the find
output into lines.)
[131094380230] |A reliable way of calling find is telling it to run a command for each file it finds:
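A sketch (with GNU find, {} may be embedded inside a longer argument, so this can be run from inside the first directory):
    find . -type f -exec diff -- {} /some/other/path/{} \;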
[131094380240] |In this case, another approach is to compare the two directories, though you have to explicitly exclude all the “boring” files.
[131094390010] |Afaik find has all you need.
[131094390020] |find itself takes care of calling the programs safely. -okdir will prompt you before the diff (are you sure, yes/no).
[131094390030] |No shell involved, no globbing, jokers, pi, pa, po.
[131094390040] |As a sidenote: If you combine find with for/while/do/xargs, in most cases, you're doing it wrong. :)
[131094400010] |Loop through any files (any special character included) with the completely safe find (see the link for documentation):
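A sketch of that approach; everything happens inside find and a small sh script, so spaces, newlines and glob characters in file names are all handled (the directory names are only examples):
    find /first/dir -type f -exec sh -c '
        for f do
            diff -- "$f" "/second/dir/${f#/first/dir/}"
        done' sh {} +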
[131094410010] |Full disk encryption with password-less authentication in Linux
[131094410020] |I have a fairly standard disk encryption setup in Debian 5.0.5: an unencrypted /boot partition, and an encrypted sdaX_crypt that contains all other partitions.
[131094410030] |Now, this is a headless server installation and I want to be able to boot it without a keyboard (right now I can boot it only with a keyboard and a monitor attached).
[131094410040] |So far I have an idea of moving the /boot partition to a USB drive and making slight modifications to auto-enter the key (I think there is just a call to askpass in the boot script somewhere).
[131094410050] |This way I can boot headless, just need to have a flash drive in at boot time.
[131094410060] |As I see it, the problem with it is that
[131094410070] |I need to invest time into figuring out all bits and pieces to make it work,
[131094410080] |If there is an update which regenerates initrd, I need to regenerate the boot partition on the USB, which seems tedious.
[131094410090] |The question: is there a standard low-upkeep solution available for what I want to do?
[131094410100] |Or should I be looking elsewhere altogether?
[131094420010] |You can setup your system to require a key instead of a password and change some scripts to search for this key on a USB stick.
[131094420020] |I found a detailed explanation for this process on Debian Lenny.
[131094420030] |There are some notes in the end that describe necessary changes for newer versions of Debian.
[131094430010] |But then what is the point of having full disk encryption, if you're just leaving the keys laying around in plaintext?
[131094430020] |For that to work, you'd need something like what the Trusted Computing Platform was supposed to be before Microsoft and Big Media hijacked it for their own evil user-subduing purposes.
[131094430030] |The idea is to have a chip holding the keys in the motherboard, and have it give out the keys only when it has verified that the running software was properly signed by a trusted authority (you).
[131094430040] |This way you don't leave the keys in plain sight and you don't have to boot the server interactively.
[131094430050] |It's a pity I've never seen Trusted Computing put to any good use, which could actually be useful for the end user.
[131094440010] |How to test what shell I am using in a terminal?
[131094440020] |How to check what shell I am using in a terminal?
[131094440030] |What is the shell I am using in MacOS?
[131094450010] |Several ways, from most to least reliable (and most-to-least "heavy"):
[131094450020] |ps -p$$ -ocmd=.
[131094450030] |(On Solaris, this may need to be fname instead of cmd.)
[131094450040] |Check for $BASH_VERSION, $ZSH_VERSION, and other shell-specific variables.
[131094450050] |Check $SHELL; this is a last resort, as it specifies your default shell and not necessarily the current shell.
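A small sketch combining these checks, run from the shell in question (on BSD/macOS ps you may need -o command= instead of -o cmd=):
    ps -p "$$" -o cmd=                  # name of the current shell process
    echo "bash: ${BASH_VERSION-not bash}"
    echo "zsh:  ${ZSH_VERSION-not zsh}"
    echo "default login shell: $SHELL"  # last resort only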
[131094460010] |get numeric ASCII value for a character
[131094460020] |I'm trying to write a shell script which asks for an ASCII character in the range A-Z or a-z and returns its equivalent numerical value.
[131094460030] |For example, the output might look like the following:
[131094460040] |My attempt:
[131094470010] |Try od -t d1 if you have it.
[131094470020] |The other output formats are quite weird.
[131094470030] |You also don't need head and cut, for example:
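(a sketch of such a pipeline, reading the character from a variable:)
    c=A
    printf %s "$c" | od -A n -t d1    # prints 65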
[131094480010] |Maybe:
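(perhaps something like this; POSIX printf treats a leading single quote in a numeric argument as a request for the character's code:)
    printf '%d\n' "'A"    # 65
    printf '%d\n' "'a"    # 97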
[131094480020] |Cheers
[131094490010] |An od-less solution, just bash, and just lowercase so far. z is the character searched for (here the letter s is the search target), and i=97 because ascii(a)=97.
[131094490020] |The rest is obvious.
[131094490030] |You may put it into a single line of course.
[131094490040] |Here are some semicolons: ;;;;; (should be enough)
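A rough reconstruction of what such a loop might look like (only a guess at the original, following the variable names described above):
    z=s        # the character searched for
    i=97       # ascii(a) = 97
    for c in a b c d e f g h i j k l m n o p q r s t u v w x y z; do
        [ "$c" = "$z" ] && echo "$i" && break
        i=$((i + 1))
    done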
[131094500010] |POSIX: printf a | od -A n -t d1
[131094500020] |Perl: perl -e 'print ord($ARGV[0])' a
[131094500030] |Perl, coping with UTF-8 if in a UTF-8 locale: perl -C255 -e 'print ord($ARGV[0])' œ
[131094520010] |If you just want a program that does this and you're not doing this as an exercise then there's a program called ascii
that does this.
[131094520020] |Your distro may offer it as a package already, if not get it from http://www.catb.org/~esr/ascii/.
[131094530010] |What graphics card is "best" in 3D on Linux?
[131094530020] |I've been wondering about this for a while and my usual resources don't seem to really know.
[131094530030] |The base question is; which 3D card performs the best on Linux (any distro)?
[131094530040] |But of course, the question is really two-fold;
[131094530050] |1) What's the best bet, performance-wise, if I insist on using strictly open-source drivers with no closed firmware or anything of the sort?
[131094530060] |2) If the best performance can only be acquired with closed-source drivers and/or closed firmware, what's the best bet?
[131094530070] |EDIT:
[131094530080] |I am flabbergasted by how difficult it is to get people to even tell me anything at all about this.
[131094530090] |I don't know how to ask this in a better way so I'm leaving it closed.
[131094540010] |1) There is no way to answer this question.
[131094540020] |Which card has the best "performance" depends on what metric you're using, which application you are using/profiling, and a variety of other factors.
[131094540030] |2) No, closed source drivers do not necessarily perform better than open drivers.
[131094540040] |Sometimes they do, sometime they don't.
[131094540050] |Your questions are too vague and general to answer.
[131094550010] |Where to search for more info if I don't have internet?
[131094550020] |For example, I was reading setuid man page today.
[131094550030] |It says
[131094550040] |If the effective UID of the caller is root, the real UID and saved set-user-ID are also set.
[131094550050] |I don't know what set-user-ID is.
[131094550060] |How can I get more information about it if I don't have internet connection?
[131094550070] |One thing I can do is to open some books and search for it.
[131094550080] |What other places on my Linux system where I can search for more information?
[131094550090] |I hope I make my point clear.
[131094560010] |use apropos
[131094560020] |try apropos 'set user id'
for an example
[131094570010] |apropos is also spelled 'man -k'
[131094580010] |Some applications (mostly those of GNU origin) come with 'info' pages.
[131094580020] |These pages usually contain a more in-depth manual of the application, and lots of extra information which you can find very useful for learning.
[131094580030] |To see if the info documentation for an application is installed, just type 'info xxx' (if the manual is not installed it will load the manpage instead).
[131094580040] |In Ubuntu at least, info pages are in a separate foo-doc package, where foo is the name of the application (ie. gcc, make, etc).
[131094580050] |And yes, the browser is lame.
[131094590010] |Is there a way to set network proxy system-wide?
[131094590020] |If I want to have GNOME applications (as well as Firefox and Chrome) access the network through a proxy, I need only use gnome-network-properties
(a nice and simple GUI I must say).
[131094590030] |For other apps (e.g. APT, Transmission, XChat), I have to use their specific ways of doing it.
[131094590040] |Is there a way to avoid this, something I can turn on and off when in a network that requires a proxy (hostname:port)?
[131094600010] |I think pretty much all linux/unix software that uses networking will honor the http_proxy
and ftp_proxy
environment variables.
[131094600020] |Depending on how your distribution is set up, /etc/environment will exist and be read by default by login shells.
[131094600030] |You can add a line saying
[131094600040] |http_proxy=123.45.67.89:1011
[131094600050] |in /etc/environment easily enough, but changes in that file will only take hold the next time you start a shell process -- and only in that shell process.
[131094600060] |I don't think you can modify global variables in a shell other than your current one.
[131094610010] |There's no one-for-all solution.
[131094610020] |Each program uses its own way to connect to a proxy server.
[131094610030] |I have a similar problem at my office.
[131094610040] |I found that best way to do this is to use a VPN connection.
[131094610050] |When you connect, the whole system network traffic will go through the VPN.
[131094610060] |If you don't have access to a VPN server, you can run a new one manually, and connect it to the proxy server you've got.
[131094610070] |If you have an application that doesn't support proxy, you can use something like socksify, to wrap its network connection.
[131094620010] |I agree that the best way to use the proxy server is with the environment variable http_proxy
, as other answers have pointed out.
[131094620020] |Unfortunately, not all applications honour the http_proxy environment variable, which is why I still use tsocks for some programs to use LD_PRELOAD to force network connections to go through a SOCKS proxy.
[131094630010] |full disk encryption on OpenBSD
[131094630020] |Is there a good method to encrypt the full disk (like with dm-crypt/AES under Linux)?
[131094630030] |Can someone please post a small and compact howto?
[131094630040] |I started a bounty to search for every encryption solution in OBSD! (need compact howtos)
[131094630050] |I'm especially searching for encryption because, if someone steals my notebook, they could get the data stored on it.
[131094630060] |Another reason is: I'm not always next to my notebook so someone could install a "trojan" on it.
[131094630070] |These are the major 2 reasons why encryption is important on a notebook.
[131094630080] |Especially if someone is trying to use a Very Secure operating system, like OpenBSD!
[131094630090] |Thank you!
[131094640010] |I don't think the OpenBSD installer supports directly creating and installing to an encrypted partition.
[131094640020] |There's little use in encrypting the system partition anyway¹.
[131094640030] |So I suggest installing the system normally, then creating an encrypted filesystem image and putting your sensitive data (/home, parts of /var, perhaps a few files in /etc) there.
[131094640040] |Boot into your OpenBSD installation and create a file that will contain the encrypted filesystem image.
[131094640050] |Make sure to choose a reasonable size since it'll be hard to change later (you can create an additional image, but you'll have to enter the passphrase separately for each image).
[131094640060] |The vnconfig man page has examples (though they're missing a few steps).
[131094640070] |In a nutshell:
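A sketch of those steps, assuming the historical svnd "crypto vnode" interface of older OpenBSD releases (file name, size and mount point are only examples; check vnconfig(8) on your release for the exact flags):
    dd if=/dev/zero of=/crypt.img bs=1m count=512   # 512 MB container file
    vnconfig -ck svnd0 /crypt.img                   # -k prompts for the passphrase
    newfs /dev/rsvnd0c
    mount /dev/svnd0c /home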
[131094640080] |Add corresponding entries to /etc/fstab:
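For example (noauto so the boot scripts don't try to mount it before the passphrase has been entered; adapt the device and mount point):
    /dev/svnd0c /home ffs rw,noauto,nodev,nosuid 0 0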
[131094640090] |Add commands to mount the encrypted volume and the filesystem in it at boot time to /etc/rc.local:
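Roughly, assuming the same device and image file as above:
    vnconfig -ck svnd0 /crypt.img
    mount /home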
[131094640100] |Check that everything is working correctly by running these commands (mount /dev/svnd0c && mount /home).
[131094640110] |Note that rc.local
is executed late in the boot process, so you can't put files used by the standard services such as ssh or sendmail on the encrypted volume.
[131094640120] |If you want to do that, put these commands in /etc/rc
instead, just after mount -a
.
[131094640130] |Then take the parts of your filesystem you consider to be sensitive and move them to the /home volume.
[131094640140] |You should encrypt your swap as well, but OpenBSD does that automatically nowadays.
[131094640150] |Another way to get an encrypted filesystem is through the software raid driver softraid
.
[131094640160] |See the softraid
and bioctl
man pages or Lykle de Vries's OpenBSD encrypted NAS HOWTO for more information.
[131094640170] |Booting from a softraid volume is not supported.
[131094640180] |¹ As far as I can tell, OpenBSD's volume encryption is protected for confidentiality (with Blowfish), not for integrity.
[131094640190] |Protecting the OS's integrity is important, but there's no need for confidentiality.
[131094640200] |There are ways to protect the OS's integrity as well, but they are beyond the scope of this answer.
[131094650010] |http://16s.us/OpenBSD/softraid.txt
[131094660010] |A problem with find and grep
[131094660020] |I have defined the following in .bashrc:
[131094660030] |in order to write
[131094660040] |and find all files that have the extension .txt and contain "my_text", but it does not work.
[131094660050] |Why?
[131094670010] |grep doesn't take file names on standard input, it searches through standard input, so you're better off with this:
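For example, a sketch that hands the matching file names to grep instead of piping them into it:
    find . -name '*.txt' -exec grep -l -- 'my_text' {} +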
[131094680010] |Aliases in bash do not take parameters (as already pointed out), so when you need something like that you can use bash functions (like the one provided by @l0b0).
[131094680020] |But what you are trying to achieve here, can be done in a better way by using only grep.
[131094680030] |BTW, fg is a shell built-in command, an important one.
[131094680040] |You should avoid using it as a name for aliases or functions.
[131094680050] |EDIT: in a function
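(a sketch; the function name findgrep is just an example:)
    findgrep() {
        find . -name "$1" -exec grep -l -- "$2" {} +
    }
    # usage: findgrep '*.txt' my_text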
[131094690010] |find ./ -name "$1" -exec grep -l "$2" {} \; should do the trick.
[131094700010] |This works!
[131094710010] |How do I prevent gnome to mount my usb device when I'm using KDE?
[131094710020] |Hello, in KDE, when I plug in a USB storage device, I see a nautilus window automatically open, and I'm obliged to use nautilus if I want to unmount it properly. Dolphin gives the following error:
[131094710030] |org.freedesktop.Hal.Device.Volume.NotMountedByHal: Device to unmount is not in /media/.hal-mtab so it is not mounted by HAL.
[131094710040] |How can I specify that when I use KDE, I want HAL to handle USB storage automatic mounting?
[131094710050] |I don't know what GNOME mechanism automatically mounts USB devices, but I guess I'll have to disable it too?
[131094710060] |I'm using fedora 13.
[131094720010] |It seems that I have solved my problem.
[131094720020] |I tried to launch nautilus in konsole and it crashed with the same message I put in my comment.
[131094720030] |Then I launched nautilus using krunner and this time, it worked, and I could eject the usb device.
[131094720040] |Now the solution to the problem is to configure auto-mounting in KDE (I did it via the New device notification applet).
[131094720050] |Once auto mounting is authorized in KDE, it seems to take priority over gnome, and you can handle devices from dolphin.
[131094730010] |Is it possible to transfer a running process to your terminal?
[131094730020] |Possible Duplicate: How can I pause up a running process over ssh, disown it, associate it to a new screen shell and unpause it?
[131094730030] |It is fairly easy to disown a process, or make it run without a tty, but is it possible to transfer the process to your own tty?
[131094730040] |note: by disowning a process, I'm talking about running a command using nohup, or by running the disown builtin.
[131094740010] |Take a look at retty which finds the stdin, stdout and stderr of a process and attaches to them.
[131094740020] |However, you don't completely own the process like you did before so you can't send it signals.
[131094750010] |screen
is very handy for this.
[131094750020] |Launch screen, launch the process inside screen, detach from screen.
[131094750030] |Then, screen -DR
to resume.
[131094760010] |How does keyboard mapping work in Linux?
[131094760020] |I have always had trouble with understanding the way keyboard mapping and related things are put together in Linux.
[131094760030] |When things break, it makes my blood boil if I have to sift through endless outdated mailing list and forum posts to find THAT one command or inputrc line that fixes my problem.
[131094760040] |There are classic problems like backspace not working in vim, or Ctrl + arrows in bash not working until you switch the terminal type.
[131094760050] |Or a problem I've encountered recently, where in a fresh Debian install the @ key actually prints ", and " prints @ (wrong keyboard layout?)
[131094760060] |Just looking at files and tools doesn't help too much. inputrc? xmodmap? setxkbmap? console-setup?
[131094760070] |Where do I get started to actually understand how it works so I don't have to resort to trying someone's dubious commands to fix my keyboard problems?
[131094770010] |This is much more complicated than it should be, but here's my stab at it.
[131094770020] |At the most basic level, the kernel knows how to recognize keyboard devices and it understands the concept of a console keymap.
[131094770030] |This is the simplest way to configure your keyboard, and there's only one variable to consider, but these settings only affect your keyboard input on the Linux text console.
[131094770040] |Once you get into Xorg, things get a bit more complicated, but it does actually make a kind of sense.
[131094770050] |Xorg has several specific notions which are very important to understand.
[131094770060] |Xorg talks directly to the keyboard device, bypassing the kernel and ignoring the keymap specified on the console.
[131094770070] |Xorg allows you to set some specific options for your keyboard in the xorg.conf file, although current Linux distributions try their best to run without any xorg.conf file at all.
[131094770080] |Depending on your version of Xorg, you may need to specify your XkbLayout inside of an InputClass section (for newer versions of Xorg) or an InputDevice section (for older releases).
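For illustration, an InputClass section might look roughly like this (the model, layouts and options are only example values):
    Section "InputClass"
        Identifier      "keyboard defaults"
        MatchIsKeyboard "on"
        Option          "XkbModel"   "pc105"
        Option          "XkbLayout"  "us,ru"
        Option          "XkbOptions" "grp:alt_shift_toggle"
    EndSection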
[131094770090] |There are several specific layers of interpretation in Xorg when a key is pressed.
[131094770100] |At the most basic level, each keypress event is represented by a numeric keycode.
[131094770110] |The XkbModel combined with the XkbLayout are used to associate a given keycode with an actual key event.
[131094770120] |For example, on my keyboard, keycode 50 generates Shift_L
.
[131094770130] |You can optionally specify multiple XkbLayouts for a keyboard, and then use the XkbOptions
(full list usually stored in /usr/share/X11/xkb/rules
) to configure how to switch layouts.
[131094770140] |(This is useful if, say, you're trying to teach yourself Dvorak, or you want to alternate between English and Russian keyboard layouts).
[131094770150] |Each keycode can be interpreted in multiple ways, depending on which modifiers are being held. xmodmap
lets you see what key events will be sent based on which modifiers are being held.
[131094780010] |These days, the console and X can actually work together.
[131094780020] |I don't know if this is completely standardized yet, but in Debian Unstable (and presumably other distros with up-to-date Xorg and console tools), /etc/default/keyboard allows you to use X-style syntax to set up the keyboard in the console and in X - at least if you are using console-tools and console-setup.
[131094780030] |The rest is as clee described.
[131094790010] |Find absolute path from a script
[131094790020] |In a script I get in $0
the possible relative path to it.
[131094790030] |For converting it to absolute I've found this solution which I don't understand:
[131094790040] |My problem is the magic inside of ${0%/*} and ${0##*/}.
[131094790050] |It looks like the former extracts the dirname and the latter extracts the filename, I just don't get how.
[131094800010] |Definitions:
[131094800020] |${string%substring}
deletes shortest match of $substring
from the end of $string
.
[131094800030] |${string##substring}
deletes longest match of $substring
from the start of $string
.
[131094800040] |Your example:
[131094800050] |${0%/*} deletes everything after the last slash, giving you the directory name of the script (which might be a relative path).
[131094800060] |${0##*/} deletes everything up to the last slash, giving you just the name of the script.
[131094800070] |So, this command changes to the directory of the script and concatenates the current working directory (given by $PWD) and the name of the script, giving you the absolute path.
[131094800080] |To see what is going on try:
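For instance, assuming the script was invoked as ./sub/dir/myscript.sh:
    echo "${0%/*}"     # ./sub/dir      (everything before the last slash)
    echo "${0##*/}"    # myscript.sh    (everything after the last slash)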
[131094810010] |Shawn had the simplest solution: readlink -f $0.
[131094810020] |If you want to be absolutely sure to handle weird file names, you can use this:
[131094810030] |Documentation
[131094820010] |If you want to learn Shell Parameter Expansion, you can read it from here, but Expansion isn't always a good choice.
[131094820020] |In this case, almost every Unix-like system has 2 good utils: basename and dirname.
[131094820030] |The first will extract the filename, while the second will extract the path, so, if you have $0, say:
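For example, with an illustrative value of $0:
    $ basename ./sub/dir/myscript.sh
    myscript.sh
    $ dirname ./sub/dir/myscript.sh
    ./sub/dir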
[131094820040] |And you'll get the path.
[131094820050] |Cheers
[131094830010] |Can a user with /sbin/nologin shell login through ssh?
[131094830020] |Hi,
[131094830030] |No flaming please.
[131094830040] |I just wonder if it's possible to ssh to a machine as a user with no valid shell (say, 'mail').
[131094830050] |So far, I cannot and using the command below won't work:
[131094830060] |It will always say, "This account is currently not available."
[131094830070] |But if I'm logged-in to the machine and use 'su'
[131094830080] |I can become user 'mail'.
[131094830090] |Thanks a lot,
[131094830100] |Ismael Casimpan
[131094840010] |The command passed to ssh
is run by the shell on the server, so if no valid shell is specified then it is not possible to run it.
[131094850010] |setserial /dev/ttyS4: Cannot set serial info
[131094850020] |So, I have a board with 6 hardware serial ports: the first 2 on the ETX bus and the last 4 on the ISA bus.
[131094850030] |The following configuration is how it is supposed to be:
[131094850040] |On startup, I run:
[131094850050] |So, I'm trying to use setserial to configure ports ttyS2-ttyS5 with the correct values:
[131094850060] |Even taking out the uart option from the last command:
[131094850070] |What do I need to do to get ttyS4 and ttyS5 configured through setserial?
[131094860010] |A couple of things strike me about what I see in your /proc
and dmesg
output:
[131094860020] |You shouldn't try to share an IRQ between devices.
[131094860030] |It may work, but the intention with ISA is that each device on the bus that needs an interrupt line to work gets its own IRQ.
[131094860040] |If your serial port cards don't give you enough IRQ options, you may simply not be able to use them all together in that PC.
[131094860050] |The I/O addresses you are using for the second pair of serial ports are nonstandard. ttyS2
is normally at 0x3E8 and ttyS3
is normally at 0x2E8. I would move those if you have that option with the serial card.
[131094860060] |(There are no standard I/O addresses or IRQs for ttyS4
and up.)
[131094860070] |Aside from all that, if I needed 6 serial ports on a Linux box, I wouldn't try to use plain old serial port adapter cards.
[131094860080] |I would use something like a Digi AccelePort.
[131094860090] |They still offer one that will work in your ISA slots, the Xe model.
[131094860100] |If you need cheap, you should be able to find one floating around on the used market; they were very popular back in the day.
[131094870010] |Try adding 8250.nr_uarts=6 or nr_uarts=6 in your kernel boot parameters.
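For instance, with GRUB 2 this could go into /etc/default/grub (only a sketch; adapt to your boot loader):
    GRUB_CMDLINE_LINUX_DEFAULT="quiet 8250.nr_uarts=6"
    # then run update-grub and reboot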
[131094870020] |Edit: Some info that might help (hopefully).
[131094870030] |http://www.linux.com/learn/docs/ldp/743-Serial-HOWTO#ss16.3
[131094870040] |http://cateee.net/lkddb/web-lkddb/SERIAL_8250_NR_UARTS.html
[131094870050] |http://cateee.net/lkddb/web-lkddb/SERIAL_8250_RUNTIME_UARTS.html
[131094880010] |Try to use the baud_rate 115200
parameter for setserial
[131094890010] |I am failing to clone a git repo when behind a proxy
[131094890020] |When I run git clone git://git.gnome.org/tracker, I get:
[131094890030] |This doesn't happen when I'm not behind the network proxy I'm currently on.
[131094900010] |Use the HTTP version of the git.gnome.org repo and set the http_proxy environment variable.
[131094900020] |You might also need to add the proxy to your git config, for example:
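A sketch of what that could look like (proxy.example.com:3128 stands in for your real proxy):
    export http_proxy=http://proxy.example.com:3128
    git config --global http.proxy http://proxy.example.com:3128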
[131094910010] |See git-config details, you can set proxies for HTTP or GIT protocols.
[131094920010] |Terminology note: the firewall is what blocks you from connecting to some sites or ports directly.
[131094920020] |The proxy is an intermediate server that you can connect to (but not for everything) and that is allowed to access the Internet.
[131094920030] |If your proxy isn't trying too hard to block non-web traffic, you may be able to get it to relay your git connection.
[131094920040] |Use a program like corkscrew or connect-proxy to use the CONNECT method to try and get through the proxy.
[131094920050] |Put something like this in your ~/.git/config (replace proxy.example.com and 3128 by your proxy's host name and port):
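A sketch, assuming corkscrew is installed (the helper script path is arbitrary):
    # ~/.git/config
    [core]
        gitproxy = /usr/local/bin/git-proxy

    # /usr/local/bin/git-proxy (mark it executable); git calls it with host and port
    #!/bin/sh
    exec corkscrew proxy.example.com 3128 "$@"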
[131094920060] |Many proxies are configured to allow CONNECT
only to port 443 (https), and they may check that the traffic they're relaying is actually SSL.
[131094920070] |If that's the case for you, as far as I know, your only options are to use a different protocol, use an external relay that you can reach (e.g. ssh tunnel with a server on port 443), or get your network administrator to allow git traffic.
[131094930010] |How do I choose a graphics card for Linux?
[131094930020] |I'm building or buying a new Linux system, and I'm trying to select the best graphics card for my needs.
[131094930030] |How do I go about making this decision?
[131094930040] |There's dozens of computer-gear review sites which drool over every detail of new graphics hardware and perform detailed benchmarks and pros and cons — for Microsoft Windows.
[131094930050] |Are these ever useful sources of information for Linux too?
[131094930060] |Does any site at least give Linux a cursory look?
[131094930070] |I'm primarily interested in good 2D performance, but with fancy new desktop environments now requiring hardware-accelerated 3D, I need to consider that too.
[131094930080] |Where can I find pre-purchase information on that?
[131094930090] |I strongly prefer having an open source driver.
[131094930100] |How do I judge which open source drivers are the best in terms of features support and performance, without joining a dozen different mailing lists?
[131094930110] |Are specific companies almost always the best bet, or does it change?
[131094930120] |What are the advantages and drawbacks of a closed-source driver?
[131094930130] |Is this mostly about 3D performance, or are there other features enabled by proprietary drivers that I might miss out on?
[131094930140] |Since a closed-source driver will mark the Linux kernel as tainted, are the closed-source companies good at providing direct end-user support for related problems?
[131094930150] |Is the state-of-the-art finally such that I can choose between open or closed for any given graphics card, or do some models require one or the other?
[131094930160] |It'd be great if the card just worked hassle-free with whatever modern Linux distribution I choose, with no need to go through a long how-to process.
[131094930170] |Is this a reasonable hope, and how can I best find a card that'll work that way?
[131094930180] |How do I find if a specific graphics driver fits a given model on the market?
[131094930190] |Is it best to buy older cards in order to insure that support is available?
[131094940010] |Check out the following lists of linux friendly graphics cards/chipsets, both open and proprietary:
[131094940020] |http://www.phoronix.com/scan.php?page=category&item=Graphics%20Cards (provides benchmarks and reviews and all, pretty cool)
[131094940030] |http://www.tldp.org/HOWTO/Hardware-HOWTO/video.html
[131094940040] |http://hardware4linux.info/search/
[131094940050] |http://xorg.freedesktop.org/wiki/Projects/Drivers?action=show&redirect=VideoDrivers
[131094940060] |On a personal note, I would choose an NVIDIA graphics card.
[131094940070] |Their proprietary linux drivers are really good and frequently updated.
[131094940080] |They even release driver versions for FreeBSD and Solaris.
[131094940090] |To my knowledge there's no match out there (neither proprietary nor free), and I didn't have any real issues with direct rendering and 3D pertaining to NVIDIA cards since the GeForce series came out.
[131094950010] |I can tell you what I do:
[131094950020] |Check if the chip is supported and/or if the manufacturer supplies drivers for the card.
[131094950030] |For instance, I have an Nvidia card, which is no problem on Linux.
[131094950040] |I can choose from a variety of drivers, and it works well that way.
[131094950050] |Nvidia was never a problem on Linux; most distros have the drivers in some repo (on Fedora, that's in fedora-fusion).
[131094950060] |Those are closed-source drivers, but it's been working well for years.
[131094950070] |I remember building the kernel module on my computer directly from the Nvidia resources, and that was six years ago.
[131094950080] |Don't be afraid to invest in a new card.
[131094950090] |Support for new cards picks up pretty quick, and since it basically all depends on the chip, it's the chip that needs to be supported.
[131094950100] |Newer cards usually have the same chip designs, but with improved performance and power efficiency.
[131094950110] |2D and 3D performance are more or less merging into one another.
[131094950120] |Compositing desktops for instance, need 3D acceleration to work properly.
[131094950130] |Another interesting aspect is, how closed-source is a closed-source driver.
[131094950140] |The Nvidia drivers are closed-source, but on the other hand, the developers keep a good contact with their userbase and Linux developers.
[131094950150] |So, the source is not open to anyone, but it is very likely, that you can have influence on the development of those drivers.
[131094950160] |Developing those drivers is no trivial matter, Xorg tried it, but they sort of failed and most people rely on closed drivers up until now.
[131094950170] |As long as the card manufacturer supplies free and good working drivers for Linux, I don't see why they shouldn't be used.
[131094950180] |To get information whether your card is supported or not, I wouldn't look too much in mailing lists, but ask the manufacturer directly.
[131094950190] |Keep in mind: The Linux users community isn't that small anymore, and especially in academia and research, Linux is usually the standard.
[131094950200] |So, manufacturers have to respond to that user sector as well.
[131094950210] |But as I said above: It's not the support for the card you're looking for, it's the support for the chip on it.
[131094950220] |When it comes to benchmarking, data from Windows can be used, as long as it uses the same acceleration toolkit (if any), i.e. OpenGL.
[131094950230] |Benchmarks done with DirectX can't be reproduced on Linux, obviously.
[131094950240] |Anyway, this is how I've been deciding which graphics card to get for my Linux computer.
[131094960010] |I suggest you buy a main-stream Nvidia card for your Linux box, and find a driver on Nvidia's official page.
[131094960020] |The driver installer will guide you through installing itself.
[131094960030] |Depending on the Linux distribution you are using, the install procedure may differ, but generally you can find a 'HOW-TO' on the distribution's forum.
[131094960040] |You needn't buy an old card.
[131094970010] |Open source drivers are getting pretty good these days.
[131094970020] |I haven't had any problems with Intel or AMD hardware.
[131094970030] |Intel: I hear the old ones are pretty bad, but my G4500HD does everything I need well.
[131094970040] |Video acceleration could be better though.
[131094970050] |There isn't a proprietary driver for Intel either; your only choice is open source.
[131094970060] |The composited 3D desktop in KDE works great on my laptop which has an Intel chip.
[131094970070] |AMD/ATi: Right now the older cards are better supported than the new ones.
[131094970080] |If you could somehow get an x1800 or something from the same generation that would probably be the best.
[131094970090] |The r300g driver is getting more development work than r600g.
[131094970100] |That's not to say r600g is bad, in fact it's great!
[131094970110] |It's just somewhat behind the driver for the older hardware.
[131094970120] |AMD has a proprietary driver for the new hardware, but in my experience you want to avoid it; it's pretty bad.
[131094970130] |The hardware covered by r300g isn't supported by that driver, so the open driver is your only option there.
[131094970140] |And like the Intel chip I have, my Radeon 4850 runs the composited desktop in KDE well.
[131094970150] |At the moment, I wouldn't recommend an HD6000 series.
[131094970160] |The 6900s have no support at all in the open driver, and the others have basic support.
[131094970170] |Go for an HD5000 or an HD4000.
[131094970180] |Nvidia: They have a really good proprietary driver, but the open driver is struggling along.
[131094970190] |It's getting better all the time, but Nvidia is doing nothing to help the developers.
[131094970200] |At least AMD helps out a little bit for their hardware.
[131094970210] |The advantage to having an open driver is that it will work out of the box in any distro.
[131094970220] |If you install Fedora, everything will work including dual screen and 3D.
[131094970230] |The proprietary ones are painful to set up.
[131094970240] |Neither of them properly set up my dual screens.
[131094970250] |It was easier to set up with Nvidia, which isn't saying much because the AMD blob was just awful at this.
[131094970260] |Also, anytime you update the kernel, you have to reinstall the driver.
[131094970270] |Most distros take care of this if you install the in-repo version, but if you don't it's annoying to boot up one morning and realize you updated the kernel and now X.org doesn't work.
[131094970280] |If you aren't planning on playing 3D games, either the Intel or AMD drivers are the best.
[131094970290] |The AMD driver is more modern than the Intel one; it uses the Gallium3D architecture within Mesa (that's what the g stands for in r600g), but they both get the job done.
[131094980010] |The choice depends on your goals.
[131094980020] |Intel has the best open source driver.
[131094980030] |They put efforts into it themselves.
[131094980040] |Intel graphic solutions are not the best 3D performers, though, being embedded-only.
[131094980050] |NVidia has the best proprietary driver with great 3D performance, and they offer both high-end 3D hardware and embedded solutions.
[131094980060] |Keeping it up to date takes a bit of attention at every kernel upgrade, even minor.
[131094980070] |This is not painful, from my experience — just rebuild and reinstall.
[131094980080] |Open-source drivers (nouveau) are improving and work well with 2D, but lag behind in 3D yet.
[131094980090] |AMD/ATI have great hardware, but their drivers are a notch below both Intel's and NVidia's, either open or closed source.
[131094980100] |You have to better stick to older well-supported cards, and people keep complaining about minor glitches.
[131094980110] |Their open-source driver develops quickly, though, and maybe in a year will become a worthy contender in 3D space.
[131094990010] |Although this post is based on facts, it still contains my personal experience and opinions.
[131094990020] |Nvidia
[131094990030] |Although there is a project for OpenSource drivers, you probably need to consider Nvidia being closed source drivers only.
[131094990040] |Now in case of Nvidia this doesn't really bring a lot of bad things since they really work on their drivers very hard.
[131094990050] |The best support when it comes to closed source graphic card drivers on Linux.
[131094990060] |Nvidia graphic cards are the only ones that provide equivalent performance on Linux and Windows.
[131094990070] |Still, the closed source drivers imply some limitations like no support for features available only to GPL drivers (like KMS).
[131094990080] |Intel
[131094990090] |Now when choosing Intel you need to be extremely careful.
[131094990100] |Some of the Intel graphic cards are actually 3rd party bundled cards that don't have any (or have very crappy) support.
[131094990110] |But if you choose the correct chip, you can enjoy the best opensource drivers out there.
[131094990120] |For example, even very low end Intel cards can be faster in compositing window managers than high end Nvidia cards.
[131094990130] |AMD
[131094990140] |Now this is complex.
[131094990150] |AMD provides both proprietary drivers (that tend to suck a lot) and they also release documentation and support open-source driver development.
[131094990160] |Now the problem is that the open-source drivers will never contain certain licensed/patented/etc... features, and since AMD doesn't really concentrate on closed-source driver development, I guess they will always be behind (Windows features/performance).
[131095000010] |I am failing to use XChat when behind a proxy
[131095000020] |So I set my HTTP proxy via Settings -> Preferences -> Network setup, and am getting this error:
[131095000030] |(
and
are substitutions)
[131095000040] |Without that setting (the IP:PORT is left empty), I'm getting this error:
[131095010010] |Is there a way to find out which program is segfault-ing?
[131095010020] |I have a Busybox/Linux system where a mystery program is segfaulting rarely.
[131095010030] |Is there a way to find which program is doing this?
[131095020010] |If the segmentation fault produces a "core" file, you can run file to identify the executable.
[131095020020] |You can also use ddd or gdb to debug the core file for more information.
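For example (the program path is illustrative; core dumps may be disabled by default, see ulimit -c):
    file core                  # names the executable that dumped core
    gdb /path/to/program core  # then run "bt" inside gdb for a backtrace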
[131095030010] |Uh, how do you know about the segfault anyway?
[131095030020] |There is a kernel log message at priority info.
[131095030030] |It shows the executable name without the directory part.
[131095030040] |On some architectures, the debug.exception-trace sysctl must be set.
[131095030050] |Some architectures require a compile-time option and kernel command line parameter (e.g. CONFIG_USER_DEBUG and user_debug on arm).
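A sketch of how to look for it (the exact wording of the log line varies by architecture):
    sysctl -w debug.exception-trace=1      # only needed on some architectures
    dmesg | grep -i segfault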
[131095040010] |Is there a Debian security APT repository that allows access via ftp?
[131095040020] |If I want security updates for Debian, I add the following line to /etc/apt/sources.list:
[131095040030] |Is there a repository that allows the http
to be ftp
?
[131095040040] |I say this because other normal Debian APT mirrors allow that, and doing the same here doesn't work.
[131095040050] |That is, using the following line results in errors:
[131095040060] |One advantage of ftp is that it's often (or rather, in my limited experience) not put behind a proxy.
[131095050010] |I've a Debian on a VirtualBox here at work, behind an enterprise proxy, and HTTP works fine.
[131095050020] |I don't know if security.debian.org allows FTP, but I don't really trust some other source for my security updates.
[131095050030] |JUST FYI: I've configured apt to use the proxy.
[131095050040] |EDIT: Home sweet home, no proxies here so I can test ftp:
[131095050050] |So, security.debian.org allows ftp anonymous login.
[131095060010] |Yes, it's cunningly called security.debian.org.
[131095060020] |Add this line to your /etc/apt/sources.list:
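(reconstructed; the path and the release codename (lenny) are assumptions you will need to adapt:)
    deb ftp://security.debian.org/debian-security lenny/updates main contrib non-free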
[131095060030] |There are no official mirrors for security.debian.org
by design, see Why is there a separate package repository for Debian security updates?.
[131095060040] |Many places that have a mandatory HTTP proxy block FTP altogether.
[131095060050] |So there aren't many situations where there is an advantage to using FTP.
[131095070010] |Where can I find information on Linux device driver parameters?
[131095070020] |I want to work on Linux module programming (Device Drivers).
[131095070030] |And as a my college project I have to do profiling or benchmarking of a kernel modules based on various parameters.
[131095070040] |Everywhere I looked, I found info on how to write device drivers, whereas I want info on what parameters affect the performance of any specific driver/module.
[131095070050] |I also want to know how can I change them to test their various values.
[131095080010] |If you are looking for the driver-specific parameters, there's several places you can find information.
[131095080020] |Hopefully there is some information in the kernel documentation.
[131095080030] |This is great for some stuff, sparse for others, and totally not there for most drivers.
[131095080040] |Run modinfo
(maybe with -F param
or -F parm
), which hopefully will give helpful one-line descriptions of parameters (see the example after this list).
[131095080050] |Find the source code for the driver and look for helpful inline documentation.
[131095080060] |Search for a possible web site organizing development of that driver (for example alsa project for sound).
[131095080070] |And, of course, where things are vague, there's always empirical testing — try it and see.
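For the modinfo route, for example (e1000e is just an illustrative module name):
    modinfo -p e1000e                                        # parameter names and descriptions
    grep -H . /sys/module/e1000e/parameters/* 2>/dev/null    # current values of a loaded module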
[131095090010] |Avoiding pasting with the mouse wheel
[131095090020] |I'm tired of having to copy and paste text using the wheel of my mouse; the amount of dexterity needed is legendary.
[131095090030] |Anybody know of any kind of frankenmouse which has four buttons (one that simulates the third button right below the wheel would be nice)?
[131095090040] |Or have any other solution?
[131095090050] |Edit: A nice solution could be some integration of the two cut and paste schemes (Ctrl+c, Ctrl+x, Ctrl+v) and the mouse highlight and paste with wheel, so I could actually use the Ctrl keys while in a terminal.
[131095100010] |I'm with you on the wheel-mice/coordination issue.
[131095100020] |I'm not so sure about a hybrid cut-n-paste scheme.
[131095100030] |I have this in my .Xresources file:
[131095100040] |It makes the F2 key into a keyboard-paste in xterm windows.
[131095100050] |I find this helpful as I'm a vim user, and it may give you some taste of what a hybrid cut-n-paste scheme might feel like.
[131095100060] |Additionally, it makes "words" in xterm text include '/', '.' and '*' so that you can copy a filename with a single double-click.
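The resource entries presumably look roughly like this sketch (my reconstruction of the two effects described):
    ! F2 pastes the PRIMARY selection in xterm
    XTerm*VT100.translations: #override <Key>F2: insert-selection(PRIMARY)
    ! add '*' (42), '.' (46) and '/' (47) to the word class (48) for double-click selection
    XTerm*charClass: 42:48,46-47:48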
[131095110010] |I use a Logitech medium-of-the-line model.
[131095110020] |It's not too big but has lots of features.
[131095110030] |I also found click-with-mousewheel to be difficult, but I found a solution I'm happy with.
[131095110040] |Most current Logitech models define pushing the wheel to tilt slightly to the left or right as a mouse click (buttons 7 and 8, in fact).
[131095110050] |This actually makes clicking straight down even more annoying, since it tends to be a bit wobbly.
[131095110060] |So, my solution: I use an .Xmodmap
line to remap the wheel-to-the-right button as the middle button:
[131095110070] |which works very well because it's really easy and natural to shift one's index finger from the left button slightly over to middle-click.
[131095110080] |This is much, much nicer than trying to click the wheel down without moving it.
[131095110090] |I actually now prefer this to a real third button.
[131095110100] |And if I had a use for a fourth button, the opposite motion of tilt-to-the-left is available too.
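The remap presumably looks something like this sketch in ~/.Xmodmap (assuming a 9-button pointer map; check xmodmap -pp for yours), applied with xmodmap ~/.Xmodmap:
    ! physical button 7 (wheel tilt right) reports button 2 (middle), and vice versa
    pointer = 1 7 3 4 5 6 2 8 9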
[131095120010] |My cheap Logitech MX has 7 buttons plus scrollwheel, wheel push, wheel left and wheel right.
[131095120020] |It is very easy to use, and the 3 thumb buttons are ideal for copy and pasting and similar functions.
[131095130010] |I have a Lenovo scrollpoint mouse.
[131095130020] |It has the silly rubber stick thing in place of a scrollwheel, but more importantly it has an actual physical button rather than clicking the wheel.
[131095140010] |I believe you are looking for this: http://www.keyboardco.com/keyboard_details.asp?PRODUCT=572
[131095140020] |This is the left-handed version, but a right-handed one is also available (I prefer lefty, because I am righty).
[131095150010] |Automatically mount network drive when available.
[131095150020] |I have a NAS drive that is accessible via SMB.
[131095150030] |I have an entry in fstab to mount it at boot.
[131095150040] |But if the drive is not switched on, it is not mounted during boot and I have to mount it manually later.
[131095150050] |(The NAS boots a lot slower than my PC and usually is not up at the point the PC tries to mount it).
[131095150060] |Is there a way to automatically mount the drive when it becomes available in the network?
[131095150070] |Preferably if it can be done via command line tools, not involving GUI programs.
[131095150080] |Using PCLinuxOS with KDE.
[131095160010] |What you're looking for is AutoFS.
[131095160020] |Install the RPM, then make sure that it's running at startup (RH/etc: chkconfig autofs on).
[131095160030] |Edit the file /etc/auto.master and add the following line: /media/ /etc/auto.media
.
[131095160040] |If I were you, I would change "media" in both places to be the name of your root-level directory.
[131095160050] |Then edit the file /etc/auto.media and add finaldirname mount-options \\192.168.1.3\sharename
[131095160060] |A quick google gives this page.
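Putting it together, a sketch (the timeout, share name and mount options are illustrative; the address and share come from the example above):
    # /etc/auto.master
    /media   /etc/auto.media   --timeout=60

    # /etc/auto.media
    nasshare   -fstype=cifs,rw,guest   ://192.168.1.3/sharename
After that, accessing /media/nasshare triggers the mount on demand.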
[131095170010] |alias or bash function does not work
[131095170020] |When I create
[131095170030] |or
[131095170040] |in my .bashrc file, I get errors.
[131095170050] |Interestingly, if I source my .bashrc file with the function, it 'compiles', but when executing, gives me:
[131095170060] |Can someone help me with this, and also answer when it's better to put something in a function versus in an alias?
[131095180010] |The problem with the alias is that quotes don't nest directly (except, as a special case, inside $()
).
[131095180020] |You need to escape the inner ones.
[131095180030] |You've removed too much of the function form for me to be certain, but the error snippet is from awk
and suggests quoting or shell variable expansion problems.
[131095180040] |As a general rule, functions are more flexible than aliases (you have no control over argument processing with aliases, aside from history expansion), but aliases are a little faster, and if the alias value ends with a space then the next word on the command line is also checked for alias expansion. Aliases also don't take effect on the line where they are defined (lines are alias-expanded as they are read, which also avoids infinite alias expansion loops).
[131095190010] |I don't think you can pass an argument to an alias.
[131095190020] |An alias is just a string replacement rule for the first word of a command.
[131095190030] |Example:
[131095190040] |will result in the command wd arg1 arg2 arg3
being replaced and executed as
[131095190050] |For everything beyond that, use functions.
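A generic sketch of the same point (ll and wd are just illustrative names):
    alias ll='ls -l'
    ll arg1 arg2 arg3        # the shell literally rewrites this to: ls -l arg1 arg2 arg3

    # a function, by contrast, can place its arguments wherever it likes:
    wd() { ps -ef | grep -- "$1" | grep -v grep; }
    wd java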
[131095200010] |Why the alias doesn't work
[131095200020] |The alias
command receives three arguments.
[131095200030] |The first is the string wd=ps -ef | grep java | awk {print
(the single quotes prevent the characters between them from having a special meaning).
[131095200040] |The second argument consists of a single space character.
[131095200050] |(In .bashrc
, the positional parameters $2
and $9
are empty, so $2
expands to a list of 0 words.)
[131095200060] |The third argument is } | egrep "(A|B|C|D)"
(again the single quotes protect the special characters).
[131095200070] |The alias definition is parsed like any other shell command when it is encountered.
[131095200080] |Then the string defined for the alias is parsed when the alias is expanded.
[131095200090] |Here are some possible ways to define this alias.
[131095200100] |First possibility: since the whole alias definition is within single quotes, only use double quotes in the commands, which means you must protect the "
and $
meant for awk with backslashes.
[131095200110] |Second possibility: every character stands for itself within single quotes, except that a single quote ends the literal string. '\''
is an idiom for “single quote inside a single-quoted string”: end the single-quoted string, put a literal single quote, and immediately start a new single-quoted string.
[131095200120] |Since there's no intervening space, it's still the same word.
[131095200130] |You can simplify this a bit:
[131095200140] |Tip: use set -x
to see how the shell is expanding your commands.
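Sketches of the two quoting styles described above, assuming the awk program prints fields $2 and $9 as in the explanation:
    # first possibility: double quotes inside, with awk's " and $ escaped
    alias wd='ps -ef | grep java | awk "{print \$2, \$9}" | egrep "(A|B|C|D)"'

    # second possibility: the '\'' idiom for a literal single quote
    alias wd='ps -ef | grep java | awk '\''{print $2, $9}'\'' | egrep "(A|B|C|D)"'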
[131095200150] |Why the function doesn't work
[131095200160] |I don't know.
[131095200170] |The part you show looks ok.
[131095200180] |If you still don't understand why your function isn't working after my explanations, copy-paste your code.
[131095200190] |Alias or function?
[131095200200] |Use an alias only for very simple things, typically to give a shorter name to a frequently-used command or provide default options.
[131095200210] |Examples:
[131095200220] |For anything more complicated, use functions.
[131095200230] |What you should have written
[131095200240] |Instead of parsing the ps
output, make it generate output that suits you.
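Possibilities along these lines (standard ps/pgrep options; java is the process name from the question):
    ps -C java -o pid= -o args=    # only java processes, only the columns you need
    pgrep -fl java                 # pgrep exists for exactly this job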
[131095210010] |Are changes in crontab applied when the file is saved, or when the editor is closed?
[131095210020] |When I do crontab -e
, are the changes applied immediately when I save the file, or do I have to exit vim for it to be applied?
[131095220010] |It waits until you exit the editor.
[131095220020] |From the manpage:
[131095220030] |The -e option is used to edit the current crontab using the editor specified by the VISUAL or EDITOR environment variables.
[131095220040] |After you exit from the editor, the modified crontab will be installed automatically.
[131095220050] |You can also tell by just watching stdout; it waits until you exit the editor and then outputs:
[131095220060] |crontab: installing new crontab
[131095230010] |What is the difference between symbolic and hard links?
[131095230020] |When would you use one over the other?
[131095240010] |Hard links are just references to the same disk space; that's why you cannot hard-link something on another filesystem.
[131095240020] |Symlinks are files that point to other files (like Windows shortcuts), which may or may not be on the same filesystem.
[131095240030] |EDIT: I will explain something more.
[131095240040] |Every file that exists has a minimum of 1 hard link.
[131095240050] |Hard links are the way to access the content of an inode of the filesystem.
[131095240060] |You can obtain the inode number of a file with ls -i
, and get the number of hardlinks with stat
as follows in this example:
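A sketch of what that looks like (file name and inode number are hypothetical):
    $ ls -i notes.txt
    1048581 notes.txt
    $ stat -c '%h' notes.txt       # number of hard links pointing at that inode
    2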
[131095240070] |Thanks @geekosaur for this reference:
[131095240080] |The kernel has to restart pathname-to-inode translation (traversing the directory tree) to expand symlinks, whereas hard links all use the same inode.
[131095240090] |(You'll often see this referred to as namei, from the name of the kernel function that did this in traditional Unix.)
[131095240100] |and this (edited):
[131095240110] |Hard links are very useful for disk-based incremental backup mechanisms like Apple's Time Machine, because you can have a full directory tree for each backup while sharing the space for files that haven't changed — and the filesystem keeps track of reference counting, so when the last reference to a given version goes away because the backup was expired/removed for space reasons, the space it used is automatically reclaimed.
[131095240120] |Some mail clients also use it for messages filed to multiple folders, for the same reason.
[131095240130] |Cheers
[131095250010] |The point of both types of links is to provide a way to make a file appear in two locations at the same time.
[131095250020] |This has a lot of uses.
[131095250030] |9 times out of 10 you want to use symbolic links.
[131095250040] |Symbolic links, or "symlinks" work a little like Windows shortcuts.
[131095250050] |The contents of a symlink are a pointer to the real location of the file/directory.
[131095250060] |If you delete the real file, the symlink will become "dangling," and won't work.
[131095250070] |Deleting the symlink does not delete the real file.
[131095250080] |You can have as many symlinks to a single file (or even other symlinks) as you like.
[131095250090] |Unlike Windows though, they work on the filesystem level, not shell or application level, so pretty much any application will "follow" symlinks as expected. ls -al
can be used as a quick way to see where symlinks "point" to.
[131095250100] |Hard links work at an even lower level.
[131095250110] |A hardlink is an actual, physical on-the-filesystem-level directory entry of the file.
[131095250120] |Technically, a directory entry is a hardlink, thus each file has at least one hardlink in a directory somewhere.
[131095250130] |Hardlinks are not separate from the file they point to; if a file has multiple hardlinks in different directories, deleting the hardlink with utilities like rm
won't truly delete the file, until all hardlinks are gone.
[131095250140] |I can't think of a situation where the use of hard links is common, or even needed, unless you intentionally want to prevent files from getting deleted or are doing some weird low-level work with partitions or other filesystem-related things.
[131095250150] |EDIT: There are great ideas in the other answers to this question, though!
[131095260010] |Hard links are very useful for disk-based backup mechanisms, because you can have a full directory tree for each backup while sharing the space for files that haven't changed — and the filesystem keeps track of reference counting, so when the last reference to a given version goes away because the backup was expired/removed for space reasons, the space it used is automatically reclaimed.
[131095260020] |Some mail clients also use it for messages filed to multiple folders, for the same reason.
[131095270010] |Why is this python error message generated whenever I type a nonsense command?
[131095270020] |Whenever I type any "nonsense" command, this python error message is generated.
[131095270030] |Normal commands work fine.
[131095270040] |Any idea how to debug this?
[131095270050] |EDIT - after fixing my /usr/bin/python, I now get this different python error message:
[131095270060] |Somehow, python is being run whenever I mistype a command.
[131095280010] |Ok, that makes things a bit clearer. command-not-found
is a python program, which runs when your command is not something found on the system.
[131095280020] |(Its function is to suggest alternatives and corrections in case of mistyping etc.)
[131095280030] |See /usr/bin/command-not-found
.
[131095280040] |It is trying to import the CommandNotFound
module and is unable to, clearly pointing to a screwed up python installation.
[131095280050] |I'm not that familiar with command-not-found
, but I think fixing your Python installation will make the problem go away.
[131095280060] |Just to elaborate a bit, what is probably happening is that the command-not-found
module is located somewhere where your default python isn't looking for it.
[131095280070] |A path problem, basically.
[131095280080] |Debug suggestions:
[131095280090] |1) To start with, what is the output from
[131095280100] |and what package/installation does that file belong to?
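For question 1, presumably something like:
    head -1 /usr/bin/command-not-found    # which interpreter does the script ask for?
    which python                          # which python comes first in $PATH?
(On a Debian-style system, dpkg -S on that path reports the owning package.)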
[131095280110] |2) What is the output for your installation corresponding to the code below?
[131095280120] |The path here is this python's import path.
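And for question 2, the import path can be printed with:
    python -c 'import sys; print sys.path'    # Python 2 syntax, as used at the time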
[131095290010] |In bash, what is the safest way to pass variables to another program -- $* or $@?
[131095290020] |Can someone explain to me in a concise way what the difference between these two vars is?
[131095290030] |What is the safest way to pass variables to another program?
[131095290040] |E.g.
[131095300010] |The safest is "$@"
because it passes each argument on as a separate word, without word splitting or filename expansion.
[131095300020] |See http://www.faqs.org/docs/abs/HTML/variables2.html#ARGLIST for more detail.
[131095310010] |Using this script:
[131095310020] |Try this demo:
[131095310030] |You should usually use "$@"
(with the quotes).
[131095310040] |Sometimes it's useful to change IFS
and use "$*"
:
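A sketch demonstrating all of this (script name and arguments are made up):
    #!/bin/bash
    # args.sh - show how the three forms split arguments
    printf 'unquoted $* :'; for a in $*;   do printf ' <%s>' "$a"; done; echo
    printf 'quoted "$*" :'; for a in "$*"; do printf ' <%s>' "$a"; done; echo
    printf 'quoted "$@" :'; for a in "$@"; do printf ' <%s>' "$a"; done; echo

    # joining arguments with a custom separator via IFS and "$*":
    join() { local IFS=,; echo "$*"; }
    join one "two three" four      # prints: one,two three,four
Running ./args.sh "one two" three shows "$@" preserving the two original arguments, while $* and "$*" do not.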
[131095320010] |Use: "$@"
if you want to represent the original arguments (including no arguments) accurately.
[131095320020] |There are 5 notations to mention:
[131095320030] |$@
[131095320040] |$*
[131095320050] |"$@"
[131095320060] |"$*"
[131095320070] |${1+"$@"}
[131095320080] |The first two are equivalent; both split up the words in the argument list and pass them on to the command being invoked.
[131095320090] |They are seldom the correct choice.
[131095320100] |The second two are radically different.
[131095320110] |The "$@"
notation effectively copies the arguments to the shell (or the arguments set via the set
command) to the command being invoked, preserving the number of arguments, the spaces within them, and the split between them.
[131095320120] |If there are zero arguments, it generates nothing.
[131095320130] |The "$*"
notation pastes all the arguments into a single string, putting a space between arguments but preserving the spaces within the arguments.
[131095320140] |It is appropriate for use in contexts such as:
[131095320150] |where the utmost fidelity to the original command line is not crucial.
[131095320160] |The final notation, ${1+"$@"}
, is a hangover from days past that you may come across in old shell scripts.
[131095320170] |Once upon a time (and a long time ago - like 30 years ago), the Bourne shell treated "$@"
as a single empty argument when there were no arguments.
[131095320180] |That has not been the behaviour for a long time, but some scripts written a very long time ago might still protect themselves by using this notation.
[131095320190] |The ${1+X}
notation means 'if $1
is set to anything, even an empty string, then provide the value X'.
[131095320200] |In the notation under discussion, the X
is "$@"
, which is a list of arguments when there is known to be at least one - and otherwise no arguments.
[131095320210] |Which means the same as "$@"
in any modern shell.
[131095330010] |Can splice()/tee() be made to work with sockets, and not merely pipes?
[131095330020] |On Linux, the system calls splice(2) and tee(2) are used to transfer data between two file descriptors.
[131095330030] |One or both of the file descriptors must be a pipe, according to documentation.
[131095330040] |Why?
[131095330050] |Can this be made to work with sockets?
[131095330060] |Does it work currently, or would kernel modifications be required?
[131095340010] |Right now they just splice pipe buffers together; socket buffers (on Linux) are different animals, and drag kernel networking into it.
[131095340020] |Limiting it to AF_LOCAL
sockets would simplify implementation at the cost of complicating validation — and you still probably have to deal with them being allocated and mapped via the network buffer management subsystem instead of from anonymous file buffers.
[131095350010] |fedora 14 problem with using root
[131095350020] |Possible Duplicate: Root password not working to log in to Gnome with gdm
[131095350030] |I recently switched to Fedora and now I have a problem.
[131095350040] |I know the root password, but I can't log in (it says the login is incorrect).
[131095350050] |I can't use sudo because I'm not a sudoer.
[131095350060] |However, I can still install things and upgrade using my root password. What's wrong?
[131095360010] |giving grep output to rm
[131095360020] |I'm trying to pipe grep
output to rm
, but it outputs useless stuff.
[131095360030] |Is any switch required for rm
?
[131095360040] |Or can rm
be given a regexp directly?
[131095360050] |rm: missing operand Try `rm --help' for more information.
[131095370010] |You need to use xargs
to turn standard input into arguments for rm
.
[131095370020] |(Beware of special characters in filenames; with GNU grep, you might prefer
[131095370030] |)
[131095370040] |Also, while the shell doesn't use regexps, that's a simple pattern:
[131095370050] |(meanwhile, I think I need more sleep.)
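A sketch of the two ideas (Dar is the filename prefix discussed in the next answer):
    ls | grep '^Dar' | xargs rm --     # works, but breaks on names with spaces/newlines
    rm -- Dar*                         # the plain shell pattern is simpler and safer here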
[131095380010] |Do not parse the output of ls
.
[131095380020] |Here, it's very simple to get the shell to filter the files you want.
[131095380030] |Note that it's the shell that's expanding the pattern Dar*
, not the rm
command.
[131095380040] |The pattern expansion performed by the shell is called globbing.
[131095380050] |In more complex cases, look up the find
command.
[131095390010] |Unhighlight Last Search in 'less'
[131095390020] |When you use a / forward search or a ? backward search in less, all matches in the file get highlighted.
[131095390030] |After I've found the instance of the word I'm looking for, what is the proper way to turn the highlighting off?
[131095390040] |Currently I just press /
then mash gibberish into the input field.
[131095390050] |No results = no highlights!
[131095390060] |I'm looking for something akin to vim's :nohl
feature, in less.
[131095400010] |From man less
:
[131095400020] |ESC-u
[131095400030] |Undo search highlighting.
[131095400040] |Turn off highlighting of strings matching the current search pattern.
[131095400050] |If highlighting is already off because of a previous ESC-u command, turn highlighting back on.
[131095400060] |Any search command will also turn highlighting back on.
[131095400070] |(Highlighting can also be disabled by toggling the -G option; in that case search commands do not turn highlighting back on.)
[131095410010] |How can I detect if the shell is controlled from SSH?
[131095410020] |I want to detect from a shell script (more specifically .zshrc) if it is controlled through SSH.
[131095410030] |I tried the HOST variable but it's always the name of the computer which is running the shell.
[131095410040] |Can I access the hostname where the SSH session is coming from?
[131095410050] |Comparing the two would solve my problem.
[131095410060] |Every time I log in there is a message stating the last login time and host:
[131095410070] |This means the server has this information.
[131095420010] |You should be able to check via the SSH_TTY
, SSH_CONNECTION
, or SSH_CLIENT
variables.
[131095430010] |Here are the criteria I use in my ~/.profile
:
[131095430020] |If one of the variables SSH_CLIENT
or SSH_TTY
is defined, it's an ssh session.
[131095430030] |If the login shell's parent process name is sshd
, it's an ssh session.
[131095430040] |(Why would you want to test this in your shell configuration rather than your session startup?)
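A sketch of the two criteria combined (SESSION_TYPE is just an illustrative variable name):
    if [ -n "$SSH_CLIENT" ] || [ -n "$SSH_TTY" ]; then
      SESSION_TYPE=remote/ssh
    elif [ "$(ps -o comm= -p "$PPID")" = sshd ]; then
      SESSION_TYPE=remote/ssh
    else
      SESSION_TYPE=local
    fi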
[131095440010] |I think Gilles and Cakemox's answers are good, but just for completeness...
[131095440020] |comes from pam_lastlog
.
[131095440030] |You can print pam_lastlog
information using the lastlog
command, e.g.
[131095440040] |for a local login, compared to
[131095440050] |for an SSH login.
[131095440060] |On my system, this works to extract it
[131095440070] |last
and w
could be helpful too, for example
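A sketch (the awk column number is an assumption about lastlog's layout on a given system):
    lastlog -u "$USER" | tail -n 1 | awk '{print $3}'   # host of the most recent login
    last -n 1 "$USER"
    w -h "$USER"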
[131095450010] |Lightweight X11 alternative available?
[131095450020] |Is there any lightwight X11 alternative suited for old systems?
[131095450030] |(Say, 1GHz and 256-314MB RAM)
[131095460010] |The only server implementations talking the X11 protocol I know of are XFree86 and X.Org.
[131095460020] |Note that X.Org is the server implementation shipped by most Linux distributions, due to licensing issues with XFree86.
[131095460030] |I don't see why those shouldn't run on your machine given those specs, provided that appropriate graphics drivers are available.
[131095460040] |Judging by the tags you're using Gentoo, so you should be able to just install X.Org by running emerge xorg-x11
and waiting for it to finish compiling (which might take a while on an old machine like this).
[131095460050] |You probably won't be able to run modern desktop environments like Gnome or KDE though, especially given the memory limitations.
[131095460060] |I would give Xfce a try, or perhaps LXDE.
[131095470010] |If you can, do yourself a favor and invest in more memory; there is nothing that beats real memory.
[131095470020] |However, I've seen Xfce running with Xubuntu 8.04 on 256 MB and 800 MHz - and I would recommend using lean software with it: Opera instead of Firefox/Thunderbird, AbiWord instead of OpenOffice, and no monitor applets (disk/net activity, weather plugin, ticker here, ticker there, Gaim+XChat+Skype+...).
[131095470030] |Sometimes closing an app to run another will be helpful.
[131095470040] |In the '90s I ran KDE on a 64 MB machine at 233 MHz, with X of course, but that was pre-YouTube times. :)
[131095480010] |First, the big caveat: I think X with a lightweight desktop environment is really going to be your best bet for desktop hardware, because a) it includes wide hardware support, including 2D and 3D acceleration on a lot of old graphics cards, b) it's not really that awfully heavyweight, and c) all X programs will just work.
[131095480020] |But there are alternatives.
[131095480030] |These generally work by running directly on the Linux framebuffer console, possibly via directfb.
[131095480040] |Some options here would be:
[131095480050] |Android-x86: a port of Google's phone/embedded OS to PC hardware.
[131095480060] |Linux kernel, but not necessarily a Unix-like userspace.
[131095480070] |Qt QWS: embedded version of the popular toolkit (apparently KDE is even partly ported)
[131095480080] |GTK-DFB: a similar thing for GTK
[131095480090] |SDL: forget all those "toolkits", with their "widgets" and "sophisticated support libraries" and "convenience"!
[131095480100] |Write your graphics as directly as possible, since SDL has direct framebuffer support
[131095480110] |But, depending on your hardware, all of that trouble might not really get you anything, because it won't necessarily be faster.
[131095480120] |And you'll have to find ports of anything you want to run, or port it yourself.
[131095490010] |Lightweight X11 => (Xvesa+jwm)
[131095500010] |The XFree86 implementation of the X server includes TinyX, which is part of many small linux distributions e.g. Damn Small Linux or embedded linux distributions.
[131095500020] |TinyX perfectly fits your requirements.
[131095510010] |Ubuntu 10.10 sound muted automatically on ThinkPad
[131095510020] |Hi,
[131095510030] |My laptop is ThinkPad T61, with Ubuntu 10.10 installed.
[131095510040] |The sound worked fine until I plugged in the Creative X-Fi Go USB sound card.
[131095510050] |After I reboot my computer, the internal speaker is muted automatically.
[131095510060] |I have to use 'alsamixer' to unmute it each time.
[131095510070] |How can I make this setting persistent?
[131095520010] |The command you want to execute is:
[131095520020] |amixer sset Master 50%
[131095520030] |To have it run when you login, add it to System >Preferences >Startup Applications
as the command.
[131095520040] |To have it run at boot time, add it to your /etc/rc.local
file.
[131095530010] |Why is there a discrepancy in disk usage reported by df and du?
[131095530020] |I have a Linux (CentOS) server; the OS plus packages use around 5 GB.
[131095530030] |Then I transferred 97 GB of data from a Windows server into two folders on this Linux server; after calculating the disk usage, I see that the total size of the two folders is larger than the used disk space.
[131095530040] |Running du -sh on each folder, one uses 50 GB and the other 47 GB.
[131095530050] |But running df -h, the used space is 96 GB, and (50 GB + 47 GB + 5 GB) > 96 GB.
[131095530060] |Is there any problem?
[131095530070] |Those two folders contain lots of files(1 million+).
[131095530080] |Thanks.
[131095540010] |This page gives some insight on why they have different values, however it seems to suggest that your du
size should be the smaller of the two.
[131095540020] |df
uses total allocated blocks, while du
only looks at files themselves, excluding metadata such as inodes, which still require blocks on the disk.
[131095540030] |Additionally, if a file is deleted while an application still has it open, du
no longer counts its space, but df
still counts it as used until the application closes the file.
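To see which deleted-but-open files are involved, lsof can list them:
    lsof -nP +L1          # open files with a link count of 0, i.e. deleted but still held open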
[131095550010] |When du
is larger than df
, the usual reason is "sparse blocks": if a program doesn't actually write to a disk block but instead seeks past it, it gets a zero pointer in the inode's block allocation map and no actual disk space is reserved for it.
[131095550020] |If you later write to it, an actual disk block will be allocated and the map will be changed to point to the new block.
[131095560010] |Script to list only files of type ASCII text in the current directory?
[131095560020] |How to write a shell script which searches the current UNIX directory and returns the names of all files of type ASCII text?
[131095570010] |Exec 'file' on all the files in the current directory, and then grep for 'ASCII':
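A guess at the intended one-liner:
    file * | grep ASCII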
[131095580010] |find . -type f -print0 | xargs -0 file | grep ASCII
[131095580020] |On CentOS 5, ASCII can mean a lot of things such as "ASCII C++ program text", "ASCII English text", and "ASCII text" so you might need to narrow it down more.
[131095590010] |The best of both worlds: it avoids the useless use of xargs
, and speeds things up, since the +
form makes find pass many files to each invocation of the command (much like xargs does) instead of running it once per file.
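Presumably something along these lines (-maxdepth 1 keeps it to the current directory; drop it to recurse):
    find . -maxdepth 1 -type f -exec file {} + | awk -F: '/ASCII text/ {print $1}'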
[131095600010] |How to Fix Choppy Video Playback in Ubuntu?
[131095600020] |On Ubuntu 10.04 I experience choppy video playback.
[131095600030] |I am running Mplayer and have an Nvidia GeForce 9800 GTX+ video card.
[131095600040] |I have already installed the libvdpau1 library.
[131095600050] |I don't know if hardware acceleration is enabled on my video card or if it is supported.
[131095600060] |Can anyone provide suggestions on how to decrease the choppiness?
[131095600070] |Here is my xorg.conf file:
[131095610010] |The reason you have an xorg.conf
with all those settings is that you use the proprietary nvidia
driver and the GUI tools that come with it; my Intel &ATI graphics (with open source drivers) don't need any xorg.conf
settings anymore.
[131095610020] |Now, about the choppiness:
[131095610030] |what sort of video are you trying to play (resolution, codec, ...)?
[131095610040] |does mplayer actually use vdpau?
[131095610050] |(I'm pretty sure it will say that somewhere in the output you get when you start it in a terminal.)
[131095610060] |is your PC doing other things at the time you try to play this?
[131095610070] |are you playing this from a local drive or over a network? (wired/wireless?)
[131095620010] |Are you sure your mplayer is configured with vdpau support?
[131095620020] |If unsure, add this ppa to your sources and install the mplayer package: ppa:rvm/mplayer
[131095620030] |To make sure you're using VDPAU, try this (assuming ALSA sound here, YMMV):
[131095620040] |VDPAU uses hardware-accelerated playback, so the CPU shouldn't be really busy while playing a video file.
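A plausible command line (the file name is a placeholder; the codec list is mplayer's usual VDPAU set):
    mplayer -vo vdpau -vc ffh264vdpau,ffmpeg12vdpau -ao alsa /path/to/video.mkv
While it plays, top should show low CPU usage if VDPAU is really doing the decoding.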
[131095630010] |Dump Page table layout (KERNEL CONFIG)
[131095630020] |While configuring kernel for debugging found this option:
[131095630030] |CONFIG_X86_PTDUMP: Export kernel pagetable layout to userspace via debugfs
[131095630040] |Does this mean the RAM page-table layout? Any guides on how to use debugfs to view this layout?
[131095640010] |Take a look at the following:
[131095640020] |Page table management
[131095640030] |Dumping kernel page tables
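A sketch, assuming the kernel was built with CONFIG_X86_PTDUMP and you are root:
    mount -t debugfs none /sys/kernel/debug      # if debugfs is not already mounted
    cat /sys/kernel/debug/kernel_page_tables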
[131095650010] |How to view bad blocks on mounted ext3 filesystem?
[131095650020] |I ran fsck -c on the (unmounted) partition in question a while ago.
[131095650030] |The process was unattended and the results were not stored anywhere (except the badblock inode).
[131095650040] |Now I'd like to get the bad-block information to know if there are any problems with the hard drive.
[131095650050] |Unfortunately, the partition is used in the production system and can't be unmounted.
[131095650060] |I see two ways to get what I want:
[131095650070] |Run badblocks in read-only mode.
[131095650080] |This will probably take a lot of time and put an unnecessary burden on the system.
[131095650090] |Somehow extract information about bad blocks from the filesystem itself.
[131095650100] |How can I view known badblocks registered in mounted filesystem?
[131095660010] |Try
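Presumably dumpe2fs with its bad-blocks option (an assumption on my part):
    dumpe2fs -b /dev/sdXN     # lists the blocks recorded in the filesystem's bad-block inode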
[131095670010] |How to change title in gnome-terminal profile
[131095670020] |I know how to load a specific terminal profile as well as loading a config file that the terminal reads, but every time I try to set the title and save the config file, it still returns to the default.
[131095670030] |I can only change the window title for that session, and only within the menus.
[131095670040] |If I try something like
[131095670050] |gnome-terminal --title="MyTerminal" this brings up a terminal, but the title is still at its default.
[131095670060] |So, how do I change the title from the command line and within a config file?
[131095680010] |I just answered a very similar question here: http://askubuntu.com/questions/30988/how-do-you-set-the-title-of-the-active-gnome-terminal-from-the-command-line/31004#31004
[131095680020] |Basically, you can set the title in your ~/.bashrc file, but you need to change PS1 environment variable so it doesn't override you by automatically setting the title/icon-name itself.
[131095680030] |Take a look at the instructions I posted there, and if you can't figure it out from there, or run into any problems, let me know and I'll walk you through it.
[131095690010] |Looking over the way gnome-terminal works, it looks like you need to do a couple of things:
[131095690020] |Create a new profile, go into Edit -> Current Profile -> Title and Command
[131095690030] |Select the option to Keep/Prepend/Append the shell-supplied title (to suit)
[131095690040] |Run the command gnome-terminal --title="Wheeee" --profile="The New Profile"
[131095690050] |It appears as though the config-file saving is really for session saving (i.e. it stores all your open windows), and it does not save any command-line provided titles, so you can get what you want via a command-line + profile, but not via the config file.
[131095690060] |I've taken the liberty of reporting the lack of command-line option saving in the save-config switch against G-T at https://bugzilla.gnome.org/show_bug.cgi?id=645207
[131095700010] |How to verify hardware acceleration settings on Nvidia GeForce 9800
[131095700020] |How can I verify whether hardware acceleration is available and whether it is enabled for my video card.
[131095700030] |If it makes a difference I am using Ubuntu 10.04.
[131095710010] |If you don't already have it, install glxinfo
; in APT it's part of mesa-utils
:
[131095710020] |Run glxinfo
and look for a line about direct rendering
(another term for hardware acceleration):
[131095710030] |If it says "Yes", hardware acceleration is enabled
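For example:
    sudo apt-get install mesa-utils
    glxinfo | grep 'direct rendering'
    # direct rendering: Yes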
[131095720010] |fedora like ubuntu
[131095720020] |I remember Ubuntu as a system that had every repository I could enable in order to download apps.
[131095720030] |But now, in Fedora, I can only use its own repository, and that's all.
[131095720040] |Do you know if I can change my Fedora to be more like Ubuntu?
[131095730010] |You can download RPMs from sites like RPM Fusion, rpmfind, etc. (see the RPM Fusion repo).
[131095730020] |Repository information can also be manually entered and configured in /etc/yum.repos.d/rpmfusion.repo
[131095730030] |Once a repo is added to yum, you can enable or disable it with the --enablerepo and --disablerepo options.
[131095740010] |how to assign another modifier to Alt key for X11?
[131095740020] |Aim
[131095740030] |I would like to assign Alt to the CapsLock key, and Meta to the Alt key.
[131095740040] |That is, the Alt key should no longer be recognized as Alt, and the CapsLock key should no longer be recognized as CapsLock.
[131095740050] |openSUSE 11.4
[131095740060] |Previously
[131095740070] |openSUSE 11.1 -- since I am the only user of my computer I "simply" edited the /usr/share/X11/xkb/keycodes/xfree86 file and it worked without problem.
[131095740080] |Keys were wired to their symbols at the lowest level.
[131095740090] |Problems
[131095740100] |1. xkb
[131095740110] |I created a variant of Polish layout (pl_ext) which (for test) consists of such entries:
[131095740120] |However, this does simply nothing: in xev the CapsLock key is recognized as (symbol) Alt_L, but when I press the CapsLock key it behaves like CapsLock (e.g. assuming you have a File menu, Alt+F should open that menu, and it does not).
[131095740130] |Read more in edits.
[131095740140] |QUESTION: how to define a layout file to set CapsLock-key as Alt, and Alt-key as Meta?
[131095740150] |2. xfree86
[131095740160] |But now this does not work; I guess another keycode table file is read instead of xfree86.
[131095740170] |QUESTION: how to find out which keycode table file is used by system (X/Gnome)?
[131095740180] |3. xmodmap
[131095740190] |This part is solved thanks to Gilles.
[131095740200] |Half of success here.
[131095740210] |This part works as desired:
[131095740220] |Now, I have trully CapsLock-key which is mapped to Alt.
[131095740230] |But this:
[131095740240] |does a strange thing: xev shows that the Alt key is mapped to Meta, but when I press Alt+F (which should now be Meta+F --> doing nothing), the File menu is opened.
[131095740250] |What's more, when I press Alt+Tab, I get window-switcher (I should not -- Alt is Meta now).
[131095740260] |QUESTION: how do I "delete" the old behavior of the Alt key?
[131095740270] |Summary
[131095740280] |Answering any question would (hopefully) solve my problem, however I prefer using xkb entirely because I could then pack all the files for xkb and change layout in one place.
[131095740290] |Thank you in advance for any help!
[131095740300] |EDITS
[131095740310] |(1) ad. xkb
[131095740320] |Half of success here too!
[131095740330] |Now I have such entries
[131095740340] |and this works as desired.
[131095740350] |This does not:
[131095740360] |The Alt keys are recognized as Meta by xev, but I can still open the menus with Alt+F and switch windows, and I shouldn't be able to.
[131095740370] |And on the other hand I cannot enter any national character, and I should.
[131095740380] |(2) ad. xfree86
[131095740390] |The best option for me -- editing keycode tables -- solved!
[131095740400] |See below (in answers).
[131095750010] |(This answer is about xmodmap only.
[131095750020] |I'm sure it's possible to do this with XKB, I just don't know how.)
[131095750030] |Modifiers and keysyms are assigned independently.
[131095750040] |But you get strange effects if you don't set them consistently.
[131095750050] |I think all you're missing is the add
command to assign a modifier to Meta_L
, though you may also need to clear and reassign the modifier keys.
[131095750060] |You may replace Mod1
and Mod2
by Mod3
, Mod4
and Mod5
: they are interchangeable, just make sure you don't use one for two different purposes.
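A sketch of such an ~/.Xmodmap (check xmodmap -pm first; the choice of mod1/mod2 here is arbitrary and may clash with NumLock on some setups):
    clear lock
    clear mod1
    clear mod2
    keysym Alt_L = Meta_L
    keysym Caps_Lock = Alt_L
    add mod1 = Alt_L
    add mod2 = Meta_L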
[131095760010] |Editing keycodes approach
[131095760020] |I found the answer on a Polish Ubuntu forum.
[131095760030] |Now one does not edit the "xfree86" file but the "evdev" file.
[131095760040] |This way you can make permanent changes which work for all layouts.
[131095760050] |Example (at Gilles' request) -- log in as root, go to
[131095760060] |make a backup of the "evdev" file, and then edit it.
[131095760070] |Change the codes to your liking, for example in my case:
[131095760080] |You can find out what the codes are by running the "xev" command, or you can simply look at the original "evdev" file.
[131095760090] |Log out, log in, enjoy your new keyboard :-).
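A hypothetical illustration of the kind of edit (these are the usual evdev defaults, swapped so the two physical keys trade roles):
    // in /usr/share/X11/xkb/keycodes/evdev
    <CAPS> = 64;    // originally 66
    <LALT> = 66;    // originally 64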