Why does `man -k cron` say `vixie-cron (rpm)` for some results, and how do I read those pages?
[131048880020] |I'm trying to find out some details regarding cron and /etc/cron.d/ on a RedHat system.
[131048880030] |Ironically, my answer is viewable within the vixie-cron-*.src.rpm for this package, but the man pages for cron don't match the text provided by the vixie-cron RPM.
[131048880040] |If I search my MANPATH for the keyword cron, it shows me some results like vixie-cron (rpm).
[131048880050] |What does this (rpm) tag mean, why are these manpages not installed on the system and can I view those manpages?
[131048880060] |Note that there is no page for vixie-cron, and the vixie-cron package doesn't actually provide any man pages named vixie-cron.
[131048890010] |The command man -k queries against a pre-compiled database and not the manual pages themselves.
[131048890020] |I suspect that entries may have been made in the database (see man mandb for details) for pages that don't actually exist.
[131048890030] |I am not familiar enough with the RPM mechanisms to know how this could have happened.
[131048890040] |In a similar vein, there is considerable flexibility in what section a given manual page may claim to live.
[131048890050] |For example, on my system man Carp claims to be in section "3perl" where the underlying file is stored in .../man3/Carp.3perl.gz.
[131048890060] |The commands
[131048890070] |all yield the same page while man -s 3junk Carp complains that there is no such entry.
[131048890080] |You might find mlocate (a.k.a. locate) to be useful for hunting files by name.
[131048890090] |I presume it is available for RedHat since redacted@redhat.com is credited as the author.
[131048900010] |How do I add MP3/etc support to my *nix desktop?
[131048900020] |Almost every desktop *nix seems to ship without support for MP3 or other popular codecs that you really need.
[131048900030] |What is the easiest way to add support for these codecs?
[131048910010] |Unfortunately, I think this is likely to be very distro-specific; in general there's probably a package in your package manager that provides the capability, and you just have to figure out what its name is.
[131048910020] |On Gentoo there are global use flags:
[131048910030] |
mp3 -- Causes a package to depend on media-sound/lame for MP3 encoding
[131048910040] |
mad -- Causes a package to depend on media-libs/libmad for MP3 decoding
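Assuming a standard Gentoo setup, those flags would go in /etc/portage/make.conf (a sketch; you would then re-emerge affected packages, e.g. with emerge --newuse world):

```shell
# /etc/portage/make.conf -- global USE flags ('mp3' and 'mad' as described above)
USE="mp3 mad"
```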
[131048920010] |In Debian based distributions, such as Ubuntu, you first have to enable their non-free package repositories through your-favorite-package-manager.
[131048920020] |In synaptic, open Settings > Repositories and make sure all boxes are checked.
[131048920030] |In apt-get things are a bit more tricky.
[131048920040] |You'll have to track down the URL for your distro's non-free repository and add it to your /etc/apt/sources.list.
[131048920050] |Then install your-favorite-mp3 lib, or simply reinstall your-favorite-media-player.
[131048920060] |If that doesn't work, just install VLC; any other media players you have running should pick up and run with the mp3 libraries it depends on.
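For Debian proper, the line to add to /etc/apt/sources.list looks roughly like this (mirror hostname and suite are examples; check your distro's documentation for the real one):

```shell
# example /etc/apt/sources.list entry enabling the contrib and non-free sections
deb http://deb.debian.org/debian stable main contrib non-free
```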
[131048930010] |The mp3 codec is patented through Fraunhofer in the United States (Patent 5,579,430).
[131048930020] |Deployment of a decoder requires a very small royalty be paid to Fraunhofer for use of mp3 tech. Use of the codec without payment subjects the creator(s) of the installation to the liability of patent infringement.
[131048930030] |
For paid *nix installs, this shouldn't be a problem; it's assumed it will be included in the price.
[131048930040] |
For free *nix installs, this can be toxic to ship in an "official" manner.
[131048930050] |Being a "free" install, there would be a burden to cough up royalty money every time someone did a download.
[131048930060] |Kinda makes it difficult to be "free".
[131048930070] |Expect answers to differ on how to best approach this.
[131048930080] |The free installs tend to take the "not here, go over there, nudge nudge wink wink" approach.
[131048930090] |There is typically a repository that is "unofficially" maintained by volunteers (always in a country where the patent doesn't apply), which you will need to enable in your local installation to gain access to.
[131048930100] |Visiting the website or FTP directory for these repositories usually shows an up-front disclaimer that states "If you're in a country that has patents on mp3 tech, you are liable for use of these, yada yada..."
[131048930110] |If there is no repository system (such as apt-get or yum) then you're left to your own devices to download the required binaries and/or source, and install them.
[131048930120] |One such installation would be LAME, which also provides an mp3 encoder.
[131048930130] |Debian's approach is rather novel; they ship the toolame library, which uses the non-patent-encumbered mp2 (mpeg audio layer 2) format, which was a precursor to mp3 (mpeg audio layer 3).
[131048930140] |The advantage to this is that the file format works interchangeably with mp3 players, without any effort or incompatibility.
[131048930150] |The disadvantage of this is that mp2's are not as well-compressed, so the files tend to be about 10% larger than the same audio compressed as an mp3.
[131048930160] |Unfortunately, toolame never really seemed to catch on.
[131048940010] |Our *nixes always recommend free formats over the restricted ones: see the Ogg Vorbis format (lossy) or FLAC (lossless).
[131048940020] |But if you must have your non-free format supported, here are guides for a few *nixes:
[131048940030] |
Ubuntu
[131048940040] |Ubuntu has a detailed guide for installing restricted formats.
[131048940050] |In particular for recent Ubuntu versions it is as simple as opening the Terminal, and executing the following command:
[131048940060] |
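The command in question is presumably the restricted-extras metapackage; the package name below is an assumption based on the Ubuntu guide, so check the guide for your release:

```shell
sudo apt-get install ubuntu-restricted-extras
```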
[131048940140] |The OpenBSD FAQ recommends installing LAME and states that "Lame is included in the OpenBSD ports tree."
[131048940150] |
MP3 Support included
[131048940160] |There are some Linux distros like Slackware that include MP3 support by default.
[131048950010] |What is the difference between Context output format and Unicode output format while taking diff?
[131048950020] |What is the difference between Context output format and Unicode output format when taking a diff?
[131048960010] |Apparently you've misread the manual.
[131048960020] |The -u flag is for unified context, not Unicode and -c is for copied context, not 'Context format':
[131048960030] |-c -C NUM --context[=NUM] Output NUM (default 3) lines of copied context.
[131048960040] |-u -U NUM --unified[=NUM] Output NUM (default 3) lines of unified context.
[131048960050] |The most straightforward way to find out the difference is to try it out:
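For instance, a minimal experiment (file contents are arbitrary):

```shell
printf 'a\nb\nc\n' > old.txt
printf 'a\nB\nc\n' > new.txt
diff -c old.txt new.txt   # copied context: two file listings, changed lines marked '!'
diff -u old.txt new.txt   # unified context: one merged hunk with -/+ lines
```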
[131048960060] |Do you see the difference?
[131048970010] |Control bandwidth with iptables
[131048970020] |How can I control bandwidth in RHEL 5 using iptables?
[131048980010] |You can't with iptables alone.
[131048980020] |You should "mark" packets in the mangle table and then apply QoS with the tc program.
[131048980030] |Take a look here for a rather comprehensive documentation on how to do it.
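A rough sketch of that mark-then-shape approach (interface, port, rate, and mark value are all invented for illustration; requires root):

```shell
# 1. mark outgoing HTTP packets in the mangle table
iptables -t mangle -A OUTPUT -p tcp --dport 80 -j MARK --set-mark 10
# 2. attach an HTB qdisc to eth0 with a 512kbit class
tc qdisc add dev eth0 root handle 1: htb default 30
tc class add dev eth0 parent 1: classid 1:10 htb rate 512kbit
# 3. steer packets carrying firewall mark 10 into that class
tc filter add dev eth0 parent 1: protocol ip handle 10 fw flowid 1:10
```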
[131048990010] |Meaning of "rc5" in "linux kernel 2.6.37-rc5"
[131048990020] |When I visited the kernel.org website to download the latest Linux kernel, I noticed a package named 2.6.37-rc5 in the repository.
[131048990030] |What is the meaning of the "rc5" at the end?
[131049000010] |Release Candidate.
[131049000020] |By convention, whenever an update for a program is almost ready, the test version is given a rc number.
[131049000030] |If critical bugs are found, that require fixes, the program is updated and reissued with a higher rc number.
[131049000040] |When no critical bugs remain, or no additional critical bugs are found, then the rc designation is dropped.
[131049010010] |linux python scripts
[131049010020] |Hi all,
[131049010030] |I have some Python scripts (*.py) on Linux.
[131049010040] |How can I make these scripts so that Windows users can run them?
[131049020010] |Hmm, are they not executable at the moment?
[131049020020] |Are there any errors?
[131049020030] |What's wrong?
[131049020040] |First of all: does Windows even have Python installed?
[131049020050] |Open up a Command Prompt and type in python.
[131049020060] |If you get into a Python interpreter shell, you do have it.
[131049020070] |Next, to run the files, you have to cd into their directory and just run python file.py.
[131049020080] |If you were to give more details, maybe I could help a bit more?
[131049030010] |Getting the Dell XPS 16 Synaptics touchpad to work after hibernation in Ubuntu.
[131049030020] |I am currently experimenting with using Ubuntu as my main OS instead of Windows 7.
[131049030030] |So far, pretty much everything is working fine, except for an issue that I am having with my touchpad.
[131049030040] |I have a Dell XPS 16 (1640) with a Synaptics touchpad.
[131049030050] |It works out of the box, but it seems that it stops working after returning from hibernation mode.
[131049030060] |This problem has also been addressed in an earlier bug: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/317270 but judging from this bug report, it should be fixed by now.
[131049030070] |I am running the 2.6.35-23-generic-pae kernel.
[131049030080] |Any ideas?
[131049040010] |Can anybody recommend an HTTP debugging proxy?
[131049040020] |I would like something that allows me to:
[131049040030] |
Inspect all HTTP(S) traffic between my computer and the Internet, including 127.0.0.1
[131049040040] |
Modify incoming or outgoing data
[131049040050] |
It would also be nice if it had a scripting subsystem for setting rules and events
[131049040060] |
I prefer it be a GUI application.
[131049040070] |Please do not answer with WireShark.
[131049040080] |I am aware of WireShark and I have used it many many times and it's a great app.
[131049040090] |I would like something that restricts its captures to the application layer and HTTP(S) traffic only and ignores the other Internet Protocol Suite layers.
[131049040100] |Also, it doesn't have some of the features I listed above.
[131049050010] |Here are a couple:
[131049050020] |
[131049060010] |How do I use redirection with sudo
[131049060020] |Possible Duplicate: Redirecting stdout to a file you don't have write permission on
[131049060030] |Yeah, I could (and probably will) just escalate to root, but I'd like to know why this doesn't work.
[131049060040] |sudo is configured to be able to run any command, I've placed no restrictions on it.
[131049070010] |By
[131049070020] |sudo is capable of running any command
[131049070030] |do you mean that conceptually sudo can be configured to run any command or that you have configured sudo on the system in question as, roughly, = ALL?
[131049070040] |Have you tried sudo -u 0 cat ...?
[131049070050] |This would force sudo to execute the command as root, provided that your user id is allowed to do so.
[131049070060] |What does sudo -l print?
[131049080010] |This fails because the redirection >> is always performed by the shell before the command is executed, regardless of what the command is.
[131049080020] |In this case, the shell is running as you (not root) and tries to append to the .../config file using your current permissions, not root's, and fails before sudo even runs.
[131049080030] |A common idiom for doing what your command intends is:
[131049080040] |(assuming that you have read permission for .mplayer/config).
[131049080050] |Because /home/griff/... is opened by tee in the root context of sudo, it has root permissions to write that file.
[131049080060] |I'm not wild about this approach as it copies the contents of .mplayer/config to the standard output - along with appending it to griff's file - but it does work.
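The idiom being referred to is presumably cat file | sudo tee -a target. A sudo-free demonstration of how tee both prints and appends (file names are examples):

```shell
echo 'vo=xv' > source.conf
cat source.conf | tee -a dest.conf   # prints the line and appends it to dest.conf
# with 'sudo tee -a', dest.conf would be opened with root's permissions instead
```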
[131049090010] |As msw answered before, this happens because >> happens before the actual command execution and does not run with the elevated sudo privileges.
[131049090020] |An alternative way to do this, is to wrap the entire command in another bash command shell:
[131049090030] |This will start a new bash shell with sudo privileges and close it after executing the command.
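The shape of the command is presumably sudo bash -c 'cat config >> target' (paths omitted, since the originals were not preserved). A sudo-free demonstration that the redirection runs inside the wrapped shell:

```shell
# the >> is parsed by the inner bash, so under sudo it would run as root
bash -c 'echo "vo=xv" >> wrapped.conf'
grep 'vo=xv' wrapped.conf
```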
[131049100010] |Emacs cperl mode - how to use tabs for indentation instead of spaces
[131049100020] |Is there a way to make cperl mode in emacs use all tabs for indentation instead of spaces?
[131049100030] |I've tried setting indent-tabs-mode, and cperl-tab-always-indent.
[131049100040] |Here is my .emacs file:
[131049110010] |The right answer is not to use tabs.
[131049110020] |But ok, just for the sake of knowing how it's done…
[131049110030] |CPerl uses the default Emacs settings for tab usage, and the Emacs default is to use tabs.
[131049110040] |So you're already getting tabs.
[131049110050] |Note that the default amount of indentation is 2 spaces, and the default tab width is 8 columns, so you need at least 4 levels of indentation to see a tab.
[131049110060] |If you want to change the tab width to 2 columns, set the tab-width variable, but note that your files will look strange to other people with a different tab width.
[131049110070] |If you want to change the amount of indentation per level to 8 columns, set cperl-indent-level.
[131049110080] |If you exchange files with other people, it's best to put these settings in a file variable (and not to use tabs, of course).
[131049110090] |For example:
[131049110100] |I think the equivalent vi modeline is # vi: ts=8 sw=8:.
[131049120010] |Adding numbers from the result of a grep
[131049120020] |I run the following command:
[131049120030] |and I get the following result:
[131049120040] |I'd like to add each of the numbers up to a running count variable.
[131049120050] |Is there a magic one liner someone can help me build?
[131049130010] |That doesn't print the list but does print the sum.
[131049130020] |If you want both the list and the sum, you can do:
[131049140010] |Try piping the output from your grep into
[131049150010] |This can all be done in awk as well:
[131049160010] |You appear to be using a GNU system, so if Perl regular expression support is available, you could write something like this:
[131049160020] |P.S.
[131049160030] |I modified the regular expression (added the + quantifier) to allow numbers >9.
[131049160040] |P.S. Alternatively, awk is sufficient (assuming GNU awk):
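Since the commands above were not preserved, here is a self-contained sketch of the grep-into-awk approach (the input lines are invented):

```shell
# pretend the printf below is the grep output from the question
printf 'count: 12\ncount: 7\ncount: 30\n' \
  | grep -oE '[0-9]+' \
  | awk '{ sum += $1 } END { print sum }'
# prints 49
```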
[131049170010] |Linux -> linux remote X login/desktop
[131049170020] |What are the (best) solutions for remote desktop in linux?
[131049170030] |Ideally I'd like to be able to log in to a remote X (KDE) session without even logging into my local machine.
[131049170040] |Maybe if I could have a remote X session forwarded to a different virtual terminal session so I can switch back and forth between local and remote with Ctrl + alt + n?
[131049170050] |This is going to be over the internet via a VPN, so data-light solutions would be best =]
[131049180010] |I haven't tried it myself, but I think you might be looking for xpra.
[131049180020] |You will have to log in to an X server locally to use it, but you should be able to set it up so your local X server has a separate workspace which connects to and mirrors an xpra workspace hosted remotely.
[131049180030] |http://jkwarren.info/blogs/index.php/2009/09/09/favorite-new-toy-xpra
[131049190010] |I believe that simple X forwarding will be too slow for what you want to do, so you'll have to choose between other protocols like VNC (there are plenty of implementations), RDP (rdesktop) or NX.
[131049190020] |I would recommend NX as it is based on X, is very fast and even provides sound and file transfer.
[131049190030] |FreeNX is easy to set up, and as it is based on X (just compressed and tunneled over ssh during transfer) you should be able to integrate it into your local machine the way you want.
[131049200010] |Make find show slash after directories?
[131049200020] |How can I make the find command show a slash after directories?
[131049200030] |For example, I want dir to show up as dir/ instead of dir.
[131049200040] |I'm using find . -print
[131049210010] |This uses the printf command to format directory names and standard print for the rest.
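The command described was presumably along these lines (GNU find assumed, since -printf is a GNU extension):

```shell
# -printf adds the slash for directories; plain -print handles everything else
find . -type d -printf '%p/\n' -o -print
```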
[131049220010] |Maybe
[131049220020] |is an option as well.
[131049220030] |From a Solaris man page:
[131049220040] |-F Marks directories with a trailing slash (/), doors with a trailing greater-than sign (>), executable files with a trailing asterisk (*), FIFOs with a trailing vertical bar (|), symbolic links with a trailing "at" sign (@), and AF_UNIX address family sockets with a trailing equals sign (=).
[131049220050] |Follows symlinks named as operands.
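The elided suggestion was presumably ls's -F option; combined with find it might look like this (an assumption):

```shell
# let ls decorate each name that find produces
find . -exec ls -dF {} +
```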
[131049230010] |Portably:
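A possible reconstruction (not necessarily the original command, but it sticks to POSIX find and sh):

```shell
# print directories with a trailing slash via a helper shell, everything else plainly
find . -type d -exec sh -c 'for d; do printf "%s/\n" "$d"; done' sh {} + -o -print
```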
[131049230020] |If you're willing to list directories and files separately (you can merge the output by sorting):
[131049230030] |With GNU find, see Shawn J. Goff's answer.
[131049230040] |If you're willing to risk non-printable characters being mangled even when not outputting to a terminal, see ddeimeke's answer.
[131049230050] |In zsh: print -rl -- **/*(DM) (D to include dot files, M to add a / after directories)
[131049240010] |How to remove duplicate entries in 'Open With' Nautilus dialogue?
[131049240020] |Would be nice to be shown the magic button that will help me remove this eyesore:
[131049240030] |It's Nautilus 2.30 on Debian (and has been there in previous versions as far as I can remember).
[131049250010] |If you look in ~/.local/share/applications and /usr/share/applications you can remove duplicates from those two places.
[131049250020] |That did it for me.
[131049260010] |You can also look in Gnome's "Applications" menu editor and remove duplicates from here.
[131049270010] |Kind of a shot in the dark, but have you tried clicking on one of the entries, and clicking the Remove button as seen in your screenshot?
[131049270020] |If this removes all of the entries, you could just re-add it.
[131049280010] |This list gets created by analyzing .desktop files located at:
[131049280020] |There might be more than one use case per application; take for example the media player banshee, which has three .desktop files by default:
[131049280030] |The only difference between those files is the starting parameter and the MimeType list.
[131049280040] |
banshee-1.desktop: General media files
[131049280050] |
banshee-1-audiocd.desktop: Audio CD's
[131049280060] |
banshee-1-media-player.desktop Audio player (Also used by rhythmbox, vlc, and others)
[131049280070] |So we have three 'Banshee Media Player' entries in the 'Open with' list (and maybe also in the 'Main Menu').
[131049280080] |The other way of filling this space is by creating personal .desktop files in ~/.local/share/applications.
[131049280090] |Either manually or by using a tool. alacarte (or right-click on 'Main Menu' -> 'Edit Menu') is one of those.
[131049280100] |Every time you create or move an application within alacarte, a new .desktop file gets placed inside ~/.local/share/applications.
[131049280110] |Disabling an application will "remove" it from the 'Main Menu', but not from the 'Open with' list.
[131049280120] |But the 'Delete' button does, by creating an identical copy from /usr/share/applications into ~/.local/share/applications and adding Hidden=true to the .desktop file, thus "overwriting" the system-wide inherited values.
[131049280130] |Deleting two of those entries from alacarte results in:
[131049280140] |Removing any entries from ~/.local/share/applications will revert to the preexisting state (three banshee items).
[131049280150] |If you really don't have any duplicates in those two folders, try removing any duplicates from alacarte or playing with the Hidden=true option in the corresponding .desktop files.
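The Hidden=true mechanism described above can be demonstrated in a throwaway directory (the banshee file name is just an example):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/system" "$tmp/user"
# a system-wide entry, as shipped in /usr/share/applications
printf '[Desktop Entry]\nName=Banshee Media Player\n' > "$tmp/system/banshee-1.desktop"
# the per-user copy that the 'Delete' button would create, with Hidden=true appended
cp "$tmp/system/banshee-1.desktop" "$tmp/user/"
printf 'Hidden=true\n' >> "$tmp/user/banshee-1.desktop"
grep '^Hidden' "$tmp/user/banshee-1.desktop"   # prints Hidden=true
```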
[131049290010] |zsync vs. jigdo
[131049290020] |What's the difference between zsync and jigdo?
[131049290030] |Their core functionality seems the same, so I'm curious on things like performance, file size, and ease-of-use.
[131049290040] |Would also be interesting to know why one got created when one already existed.
[131049300010] |
[131049300030] |For a new project, using actively developed software will have many advantages over frozen software.
[131049300040] |What will your project need in a year, and will Jigdo provide that new functionality?
[131049300050] |Now, I couldn't find evidence to back this up so someone please correct me.
[131049300060] |I believe that:
[131049300070] |
jigdo allows clients to download chunks from multiple, different mirrors, similar to bittorrent.
[131049300080] |This greatly reduces the load on the central mirror.
[131049300090] |
zsync is designed for a single, central mirror.
[131049310010] |jigdo can be difficult to use.
[131049310020] |In today's world, if you need distributed file distribution (boy, doesn't that sound just plain odd) it is probably best to use BitTorrent.
[131049310030] |It may not do all that great for updates, or incremental improvements, but it's a helluva way to distribute the load across the entire Internet, of course given enough participation.
[131049310040] |If what you need is a means by which to do incremental updates, perhaps as a central mirror to fan-out sites, or even from a collection of mirrors to end-users, I'd recommend using rsync's daemon mode to expose modules that can be synchronized.
[131049310050] |Honestly, what tool to use depends on your audience, though I can say that if your audience are not a patient bunch, jigdo is most likely out of the question entirely.
[131049320010] |How to get started with CentOS?
[131049320020] |I've been working with Debian GNU/Linux for a long time and am very proficient with it.
[131049320030] |However for a new project I've got to get familiar with CentOS ASAP.
[131049320040] |So my Question is: How do I get started (from a SysAdmin POV) with CentOS ASAP?
[131049320050] |Please remember that I'm an experienced SysAdmin, I just have no experience with RPM-based distributions in general and especially CentOS, and am looking for good resources to get that missing knowledge.
[131049330010] |There is lots of help available from the CentOS website ranging from forums to documentation.
[131049330020] |If you are an experienced Sys Admin you may simply want to browse through the documentation for the release you're using and just go through areas that are different from what you are used to (such as the RPM system).
[131049330030] |Documentation can be found here for 5.x versions: http://www.centos.org/docs/5/
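As a rough starting point for a Debian admin, a Debian-to-RPM command mapping (not exhaustive; yum syntax as on CentOS 5):

```shell
yum install pkg        # apt-get install pkg
yum search keyword     # apt-cache search keyword
yum remove pkg         # apt-get remove pkg
rpm -ql pkg            # dpkg -L pkg   (list a package's files)
rpm -qf /path/to/file  # dpkg -S file  (which package owns a file)
```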
[131049340010] |VISUAL vs EDITOR what's the difference?
[131049340020] |I generally set both VISUAL and EDITOR environment variables to the same thing, but what's the difference? Why would I set them differently? When developing apps, why should I choose to look at VISUAL before EDITOR, or vice versa?
[131049350010] |The EDITOR program should be able to work without "advanced" terminal functionality (like old ed, or the ex mode of vi).
[131049350020] |Usually it was used on teletype terminals.
[131049350030] |A VISUAL editor could be a full screen editor as vi or emacs.
[131049350040] |E.g. if you invoke an editor through bash (using C-xC-e), bash will first try the VISUAL editor and then, if VISUAL fails (because the terminal does not support a full-screen editor), it will try EDITOR.
[131049350050] |Nowadays, you can leave EDITOR unset or set it to vi -e.
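For example (the editor choices are just one sensible pairing):

```shell
export EDITOR='vi -e'   # line-mode editor, safe on dumb terminals
export VISUAL='vim'     # full-screen editor for capable terminals
```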
[131049360010] |find-command for certain subdirectories
[131049360020] |Let's say I have a directory dir with three subdirectories dir1 .. dir3.
[131049360030] |And inside I have many files and other subdirectories.
[131049360040] |I'd like to search for a file inside, say with a *.c ending, but I'd only like to search in subdirectory "dir/dir2" and all its subdirectories.
[131049360050] |How can I formulate that?
[131049360060] |Assuming I'm in dir/ I have:
[131049360070] |find . -name "*.c"
[131049360080] |to search in all directories.
[131049360090] |How do I restrict to only dir2?
[131049370010] |Find will accept any valid path so
[131049370020] |should do the trick
[131049370030] |If the dir directory is /home/user/dir you could give find the full path
[131049380010] |Assuming you are in dir
[131049380020] |of course Iain's answer is also correct
[131049390010] |You can do find dir2 -name '*.c'
[131049390020] |You could also do (cd dir2; find -name '*.c')
[131049390030] |If you wanted to look at dir1 and dir3 but not dir2, you could do find {dir1,dir3} -name '*.c'
[131049400010] |You could also use the -path parameter of find in place of -name:
[131049400020] |This could allow you to find files in dir2 even if dir2 were not a direct subdirectory, e.g.:
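A sketch of that -path variant (the directory layout is invented for illustration):

```shell
# matches *.c files anywhere under any directory named dir2, at any depth
find . -path '*/dir2/*' -name '*.c'
```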
[131049410010] |Missing Xutf8LookupString call in Solaris 10
[131049410020] |Hi there,
[131049410030] |I'm trying to run a program in Solaris 10 that fails due to the lack of Xutf8LookupString function.
[131049410040] |It is a well-known issue but I'd like to know a way to "skip it".
[131049410050] |Would defining a different non-utf8 LC_CTYPE help?
[131049410060] |Would installing a different X server help?
[131049410070] |The problem doesn't happen on OpenSolaris?
[131049410080] |Is there a way I can "update" my Solaris system to use the same X libs as an OpenSolaris one?
[131049410090] |Thanks!
[131049420010] |The function is in libX11 - changing X servers won't make a difference.
[131049420020] |Without seeing the program's source, we can't guess if changing locale settings would stop it from calling the function, but changing locale won't stop the linker from trying to find it in a library.
[131049420030] |The only way to get a libX11 for Solaris 10 or older with that function in is to build libX11 yourself.
[131049420040] |It won't be fully compatible with existing X binaries though.
[131049420050] |Sun/Oracle have never backported the new libX11 from OpenSolaris/Solaris 11 to the older releases.
[131049430010] |Ubuntu - How do you free up resources?
[131049430020] |What steps do you take to make a vanilla Ubuntu system run faster and use less memory?
[131049430030] |I'm using Ubuntu as the OS for my general purpose PC, but it's on slightly older hardware and I want to get as much out of it as I can.
[131049430040] |Short of leaner distros, what things do you do to make the standard Ubuntu OS run a bit faster for basic web browsing and word processing?
[131049430050] |I'm hoping this can be a good guide for tweaking an Ubuntu system for speed.
[131049440010] |Many Distributions offer what is called a Just Enough Operating System, or JeOS.
[131049440020] |How you go about installing these varies from distro to distro.
[131049440030] |Under Debian based distributions, such as Ubuntu, if you use a Server Install ISO, you can install the JeOS by pressing F4 on the first menu screen to pick "Minimal installation".
[131049440040] |Many distributions also provide Netinstall or USB boot install mediums that, because of limited resources, provide very stripped-down base systems to be built upon.
[131049450010] |Other people may be able to give examples of other things that will make a bigger difference or may have fancy system tweaking tips but the thing that springs to my mind is to change the desktop.
[131049450020] |It is relatively easy to try out XFCE and/ or LXDE as your desktop - just install them via Synaptic and select them at the log in screen.
[131049450030] |Both of these have, at least as far as I am concerned, a good balance between functionality and geegaws.
[131049450040] |If you find out that you really want the prettinesses of Gnome, at least you know the size of the trade off you are making.
[131049460010] |Ubuntu is fairly lean to begin with.
[131049460020] |Linux tends to have a "chunky" feel with the UI.
[131049460030] |This is a function of X as well as using non-optimized graphic drivers and such.
[131049460040] |I replaced Win 7 with ubuntu on a netbook, and it used like 1/4 of the memory out of the box, and generally ubuntu doesn't install a ton of stuff in the background, and nothing you'd really consider crapware.
[131049460050] |Also, keep in mind:
[131049460060] |Linux will eat up your memory, but that's going to be cache.
[131049460070] |It's not really "using" it.
[131049460080] |When programs need that memory, it will free it up.
[131049460090] |The total memory use is misleading.
[131049460100] |I'd suggest checking if there are graphics drivers written for your specific hardware.
[131049460110] |If you want to try a lighter OS, http://www.xubuntu.org/ is a good way to go.
[131049460120] |If that feels sluggish, then it's something with your hardware/linux interface.
[131049460130] |I ran that on a budget thinkpad from 2000, which was designed for Win 98 and could barely run Windows 2000, and it ran fine with a fairly recent version of xubuntu.
[131049460140] |On top of that, I donated it to somebody who is completely computer illiterate, and she had no trouble with it (I had already installed the basics plus the flash plugin).
[131049470010] |If you're not averse to editing text files, you may find openbox a suitable desktop environment/window manager.
[131049470020] |It's extremely lean and provides most modern features — other than a GUI configuration menu: all configuration is done with a shell script and an XML file.
[131049470030] |I personally find this preferable; the XML file is well documented inline and is organized in pretty much the way I'd expect it to be.
[131049470040] |Further documentation is available in the configuration guide.
[131049470050] |That being said, some features like panels etc. may not be suitable to your tastes.
[131049470060] |Openbox itself doesn't provide a panel or applets, but it implements a standard which many "independent" panel apps support.
[131049470070] |I actually stopped using panels when I switched to openbox.
[131049470080] |I find that the info from a few wmaker applets is enough for me.
[131049470090] |These applets serve much the same purpose as panel applets, providing sensor readouts and such, but have far lighter memory demands. aptitude search ~n^wm will get you a list of them (and some other stuff).
[131049470100] |You may need to invest a bit of time into learning openbox, but it will ultimately get you a more efficient system in terms of memory and CPU utilization.
[131049480010] |How I do this is have a look at top, sort by memory usage, and look at what seems useless.
[131049480020] |Here's an example scenario of a GNOME user who's always connected to a single LAN point:
[131049480030] |
you don't need Network Manager; it only starts to be really useful if you have a more advanced networking setup (e.g. bluetooth, wi-fi, 3G)
note that by doing this, you would also have got rid of other daemons
[131049480090] |As others have noted, you might also try other Desktops (XFCE and LXDE), which will drastically help in terms of memory usage.
[131049480100] |Note also that Firefox is a memory hog, and if you're not dependent on all those useful plugins, try alternatives, like Epiphany.
[131049480110] |If you depend on Firefox (or any other memory hog), avoid opening dozens of tabs.
[131049480120] |Use bookmarks instead, if you want to keep a tab on things (pun not intended).
[131049480130] |This process might take some time, and I encourage you to experiment, and whenever you encounter a daemon you don't recognize, check its man page or Google it, to ensure that you won't be screwing your system up.
[131049480140] |Other services that I can think of, and that you might not be using, are apache, ssh-server, and (in the case of Ubuntu) ubuntu-one.
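The first step, sorting by memory usage, can also be done non-interactively (procps ps assumed, as on Ubuntu):

```shell
# top ten processes by memory share, header line included
ps aux --sort=-%mem | head -n 11
```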
[131049490010] |Is it possible to have vim key bindings in terminal?
[131049490020] |I'm getting used to vim bindings (like pressing w to go to the next word, dw to delete a word, and such) and its modes (insert, normal, visual), and, out of curiosity, would like to know: is there some kind of implementation of this behaviour of modes and bindings from vim in my terminal?
[131049490030] |Have insert mode, normal mode and such...
[131049490040] |Thanks!
[131049500010] |In bash: set -o vi. You can run it at the command line for just this session, or add it to your .bashrc file to make it permanent.
[131049500020] |It gives you insert and normal mode (insert is the default; press Escape for normal mode), but no visual mode.
[131049500030] |Many programs use readline for input, and you can make any of them use vi-style keybindings by setting up your .inputrc with
[131049500040] |In zsh, the shell chooses its keybindings to match your EDITOR environment variable (you can also set them explicitly with bindkey -v).
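To summarize, these are the relevant configuration lines (a sketch; each goes in the file named in its comment):

```
# ~/.bashrc — enable vi-style line editing in bash:
set -o vi

# ~/.inputrc — vi keybindings for every readline-based program:
set editing-mode vi

# ~/.zshrc — select zsh's vi keymap explicitly:
bindkey -v
```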
[131049510010] |How do I add an entry in /etc/fstab for a windows share?
[131049510020] |I want to be able to mount our file server's file share (on a Mac OS X server, shared via AFP and Windows File Sharing) on my Ubuntu 10.10 linux laptop.
[131049510030] |I want to be able to mount it as my normal user, and be prompted for the password each time.
[131049510040] |What do I add to /etc/fstab to make this happen?
[131049510050] |I know I did it before, but I forgot how now.
[131049510060] |EDIT: The share in question is called "G4 320", and I am trying the following line in fstab:
[131049510070] |But I'm getting the following via dmesg:
[131049510080] |CIFS VFS: cifs_mount failed w/return code = -6
[131049510090] |EDIT2:
[131049510100] |As requested, more debug info.
[131049510110] |Output of dmesg with my fstab line:
[131049510120] |Output of dmesg with the credentials line from Michael:
[131049510130] |/var/log/messages seems to have no useful information.
[131049510140] |EDIT3: OK.
[131049510150] |Thanks again to Michael I almost have it!
[131049510160] |If I put the following in /etc/fstab then it works:
[131049510170] |However:
[131049510180] |
I do not want my password in there...
[131049510190] |
I now need to use sudo to mount the share.
[131049510200] |How can I resolve those two issues?
[131049520010] |The filesystem is the Windows shared path, and the type is CIFS:
[131049520020] |options can be all the usual mount options.
[131049520030] |You probably need to provide some sort of credentials; you can provide user and password options, or use credentials=/path/to/credentials/file and store username=... and password=... lines in that file (keep in mind that /etc/fstab is world-readable)
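Putting that together, a sketch of such an fstab entry (the server name, share name, mount point, and uid are placeholders for your setup):

```
# /etc/fstab — mount the share at /mnt/share as a CIFS filesystem.
# The "user" option lets a non-root user mount it; credentials are
# read from a root-only file instead of sitting in world-readable fstab:
#   //server/share  /mnt/share  cifs  user,uid=1000,credentials=/etc/cifs-credentials  0  0

# /etc/cifs-credentials (chmod 600), containing:
#   username=myuser
#   password=mypassword
```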
[131049530010] |The line in /etc/fstab I eventually used was:
[131049530020] |What solved the issue of not being prompted for the password as well as credentials= not working was installing mount.cifs via:
[131049530030] |Just like Michael Mrozek, I assumed I had mount.cifs installed or else I wouldn't be able to mount CIFS shares, but apparently the kernel will use its own internal code to mount unless it finds mount.cifs
[131049540010] |Block motion for any bracket type?
[131049540020] |If I want to, say delete a block, I can use text object motions.
[131049540030] |I.e if my text looks like this:
[131049540040] |And the cursor is for example on character 3.
[131049540050] |If I type diB, then :x 3 :y 4 would be deleted. If it is daB, then both the block and the surrounding brackets are deleted: {:x 3 :y 4}
[131049540060] |So, the pattern is:
[131049540070] |Where operation may be:
[131049540080] |
d - delete
[131049540090] |
c - change
[131049540100] |
y - copy ...
[131049540110] |inclusion is either:
[131049540120] |
[ or ], and < or > for their own respective brackets, etc.
[131049540180] |Now, the question is: Is there a block motion for the inner-most block with brackets of any of these types?
[131049540190] |I'd like to be able to do da? with ? being the motion I'm looking for.
[131049540200] |And if the cursor in my example above is within {} say on 3 I'd delete just the {} but if my cursor was on b I'd delete the [] block, etc.
[131049550010] |Not by default, but there may be some mechanism to add that functionality.
[131049550020] |In visual.txt, the section about operating on the visual area, it has this:
[131049560010] |Here is why the question should have been on SO: non-trivial scripting is required...
[131049560020] |NB: I've reused functions from lh-vim-lib -- BTW, there is a little bug in the version of lh#position#char_at_pos() in conf: col() must not be used.
[131049570010] |What is the difference between procfs and sysfs?
[131049570020] |What is the difference between procfs and sysfs?
[131049570030] |Why are they made as file systems?
[131049570040] |As I understand it, proc is just something to store the immediate info regarding the processes running in the system.
[131049580010] |What is the difference between procfs and sysfs?
[131049580020] |proc is the old one; it is more or less without rules and structure.
[131049580030] |At some point it was decided that proc was a little too chaotic and a new way was needed.
[131049580040] |Then sysfs was created, and the new stuff that was added was put into sysfs like device information.
[131049580050] |So in some sense they do the same, but sysfs is a little bit more structured.
[131049580060] |Why are they made as file systems?
[131049580070] |The Unix philosophy tells us that everything is a file, so both were created to behave as file systems.
[131049580080] |As I understand it, proc is just something to store the immediate info regarding the processes running in the system.
[131049580090] |Those parts have always been there, and they will probably never move into sysfs.
[131049580100] |But there is more old stuff in proc that has not been moved.
[131049590010] |sysfs is the virtual filesystem created during the 2.6 kernel release cycle to show device information, as procfs did not handle that type of information well.
[131049590020] |Memory information and the like has not been ported to sysfs, as sysfs was never intended to show that type of information, so it is unlikely it will be ported at all.
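You can see the structural difference directly from the shell; a quick sketch (paths assume a Linux system with /proc and /sys mounted):

```shell
# procfs: per-process information lives under /proc/<pid>;
# /proc/self points at the process doing the reading.
head -n 3 /proc/self/status

# sysfs: one kernel object per directory, one value per file.
ls /sys/class | head -n 5
```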
[131049600010] |Find command: Searching for executable files.
[131049600020] |What type of parameter/flag can I use with the unix find command so that I search executables?
[131049600030] |(if this question is better suited for another stackexchange forum, I welcome you telling me so)
[131049600040] |p.s.
[131049600050] |If you know of one, I've been looking for a detailed (non-beginner) tutorial/screencast about grep and/or find.
[131049610010] |Rather unsurprisingly, -executable.
[131049610020] |From the man page:
[131049610030] |Matches files which are executable and directories which are searchable (in a file name resolution sense).
[131049610040] |This takes into account access control lists and other permissions artefacts which the -perm test ignores.
[131049620010] |Portably, the following command looks for regular files that are executable by their owner:
[131049620020] |With GNU find ≥4.3, you can use -executable instead of -perm -700 to look for files that are executable by you.
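Concretely, a sketch of both forms (the demo directory is built here only to have something to search; -perm -700 matches files where the owner has the read, write and execute bits, as mentioned above):

```shell
# Demo tree: one plain file, one with owner rwx permission.
dir=$(mktemp -d)
touch "$dir/plain"
touch "$dir/prog"
chmod 700 "$dir/prog"

# Portable: regular files whose owner has read, write and execute bits.
find "$dir" -type f -perm -700

# GNU find >= 4.3: files the invoking user can execute (honors ACLs).
find "$dir" -type f -executable
```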
[131049630010] |Why does it makes sense to cache swap?
[131049630020] |It sounds quite counter-productive to me to cache pages that are swapped out.
[131049630030] |If you swap pages in, what is the advantage of first caching them in memory, only to then have to move them to the right place?
[131049630040] |Even if pages are swapped in proactively, doesn't it make more sense to "just" swap them in?
[131049630050] |Isn't caching swap in fact just a waste of resources?
[131049640010] |Never even heard of this one and it doesn't make a lot of sense to me either.
[131049640020] |I still think the good old rule of making sure it just doesn't swap at all (except of course the reservations) is best.
[131049640030] |Now, theoretically, caching stuff in directly available memory, when it takes a while to load from something as slow as a disk, has an apparent advantage.
[131049640040] |But if you're able to cache to-be-swapped pages, why again swap in the first place?
[131049650010] |After some more research, I have found that the term SwapCached in /proc/meminfo is misleading.
[131049650020] |In fact, it relates to the number of bytes that are simultaneously in memory and swap, such that if these pages are not dirty, they do not need to be swapped out again.
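You can see the counter yourself (the field name as it appears in /proc/meminfo on Linux):

```shell
# SwapCached: pages that exist both in RAM and in the swap area.
# If the page is still clean when memory pressure returns, it can be
# dropped from RAM without another write to disk.
grep SwapCached /proc/meminfo
```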
[131049660010] |What do I need to be aware of if I want to write an application that will run on any Linux distro?
[131049660020] |I'm planning on writing an app that I would like to be able to run on any Linux installation without having to rewrite any of the code in doing so (except maybe the interface, GNOME vs KDE, etc).
[131049660030] |I'm not very experienced in the minutiae of the differences between distros, and I also can't provide details about the project as it's only just entered the planning stage other than it's going to be poking around deep inside the kernel in order to interact with as much of the computer's hardware as possible.
[131049670010] |Distros differ mostly in packaging and application defaults/configurations.
[131049670020] |Any code that runs on a given architecture should run on every distro for that architecture.
[131049670030] |Also, you can easily run GNOME apps in KDE and vice versa, so you can choose the one that best fits you/your userbase and you're done!
[131049680010] |Some points to keep in mind when developing,
[131049680020] |
Use a standard build system
[131049680030] |
Avoid hard coding library paths
[131049680040] |
use tools like pkg-config to find the external packages instead.
[131049680050] |
If your application has a GUI, use some frameworks like wxWidgets which can render native UI elements depending on where you run.
[131049680060] |
Avoid creating dependencies with packages which won't run on other distributions.
[131049680070] |The only way to fully ensure your application works on all distributions is to actually run and test on it.
[131049680080] |One way you could do this is by creating a virtual machine for each distribution.
[131049680090] |VirtualBox can be used to do this.
[131049680100] |I have around 8 virtual machines on my box for this kind of testing.
[131049680110] |I think you can't generalize too much on deploying the application, as each distribution uses a different way of installing packages:
[131049680120] |Debian uses deb and Fedora uses rpm.
[131049690010] |Just my 2c, but I have had less headaches with applications that either come with packages in the official repositories or that are compiled from source.
[131049690020] |Applications that are distributed as 3rd party binaries tend to suffer from some dependency issues.
[131049690030] |I will usually need to track these down and resolve them manually.
[131049690040] |So, if I were to release a Linux app, I would work to package it and get it into the official repositories.
[131049690050] |Otherwise, I would distribute it in source form and have users compile it for their systems.
[131049700010] |Write to POSIX standards and don't bother with a GUI.
[131049710010] |The main thing is choosing a language.
[131049710020] |What language will this be run in?
[131049710030] |If you really want to run on any linux distro, you could write it in Python.
[131049710040] |Any Python app that runs on Linux will (basically) run on any Linux distro with zero modifications.
[131049710050] |Python also has really nice GTK and Qt bindings.
[131049710060] |I've never worked with GTK, but PyQt is really great to work with.
[131049710070] |The benefit of Python is that you'll probably not need to compile any extensions (it depends on what you're writing, though;
[131049710080] |even if you do need to, it's pretty easy), and you also have a great distribution source via PyPI.
[131049710090] |Installing python programs from there is usually even easier than the distro package repository.
[131049720010] |If you're writing for non-embedded Linux, the main thing to keep in mind is that different distributions will have a different collection of library versions.
[131049720020] |So you should set a sufficiently old baseline.
[131049720030] |As Debian updates slowly, Debian stable (or oldstable when it exists, in the few months after a release) tends to be a reasonable choice.
[131049720040] |You'll need to package separately for each distribution.
[131049720050] |If your application is open source and at all successful, you can count on someone picking it up and contributing the packaging, so it's not an essential skill.
[131049720060] |Other than packaging, the differences between distributions mostly affect system administration, not development or daily use.
[131049720070] |If you're going to patch the kernel, you'll have to test with more distributions as each distribution has their own patches that could cause incompatibilities and each distribution has userland settings that may rely on different sets of kernel interfaces being available (e.g. requirements for some things not to be modules).
[131049720080] |Note that what I wrote above is not true if you want your application to work on embedded systems (here meaning anything that's not a server, desktop or laptop), which even when they run a Linux kernel often don't have the usual libraries, starting with Glibc being eschewed in favor of µClibc, dietlibc, Bionic, etc.
[131049730010] |I've found the Linux Standards Base to be helpful, especially when your application includes services (daemons).
[131049730020] |See some of these sites:
[131049730030] |
[131049730060] |But if I had to be limited to just one resource, it might be the Filesystem Hierarchy Standard.
[131049740010] |Providing /bin and /lib inside a chroot jail
[131049740020] |I need to be able to provide the /bin and /lib directories inside a chroot jail so that programs can dynamically link properly.
[131049740030] |Is there a way to accomplish this without making a copy of the /bin and /lib dirs to the chroot jail?
[131049740040] |I've tried symlinks, and they don't work from inside chroot jails, and directories cannot be hardlinked.
[131049750010] |You could use mount to remount the directories you need in your jail:
[131049750020] |For use in /etc/fstab:
[131049750030] |Cheers!
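A sketch of what the answer above describes (the jail path /srv/jail is a placeholder for your own setup):

```
# One-off bind mounts (run as root):
#   mount --bind /bin /srv/jail/bin
#   mount --bind /lib /srv/jail/lib

# Equivalent /etc/fstab entries, so they persist across reboots:
#   /bin  /srv/jail/bin  none  bind  0  0
#   /lib  /srv/jail/lib  none  bind  0  0
```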
[131049760010] |If you didn't want to mount the directories as jgr said, you can use cp to recursively copy directories and create hardlinks for all files:
[131049760020] |This way your chroot's /bin and /lib can have slightly different structure / contents than the main directories.
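With GNU cp that hard-link copy looks like the sketch below; the demo builds its own little tree, but in real use it would be something like cp -al /bin /srv/jail/bin (both paths must be on the same filesystem, since hard links cannot cross filesystems):

```shell
# Stand-ins: $jail/realbin plays the system /bin, $jail/bin the jail copy.
jail=$(mktemp -d)
mkdir "$jail/realbin"
echo '#!/bin/sh' > "$jail/realbin/tool"

# -a: recurse, preserving attributes; -l: hard-link the files instead
# of copying their data. Directories are created anew, so the two
# trees can later diverge in structure and contents.
cp -al "$jail/realbin" "$jail/bin"

# Same inode twice: the file data exists only once on disk.
stat -c %i "$jail/realbin/tool" "$jail/bin/tool"
```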
[131049770010] |$VAR vs ${VAR} and to quote or not to quote
[131049770020] |I can write
[131049770030] |the end result to me all seems about the same.
[131049770040] |Why should I write one or the other? are any of these not portable/POSIX?
[131049780010] |VAR=$VAR1 is a simplified version of VAR=${VAR1}.
[131049780020] |There are things the second can do that the first can't, for instance referencing an array index or counting the length of the value.
[131049780030] |See the More on variables section of the Bash Guide for Beginners.
[131049780040] |Using quotes around a variable as in VAR="$VAR1" and VAR="${VAR}" is usually a good idea.
[131049780050] |This makes the contents of the variable an atomic unit.
[131049780060] |If the variable value contains spaces and you don't quote it, then each word is considered an argument in whatever you're doing.
[131049780070] |On portability: According to POSIX.1-2008 section 2.6.2, the curly braces are optional.
[131049790010] |${VAR} and $VAR are exactly equivalent.
[131049790020] |For a plain variable expansion, the only reason to use ${VAR} is when parsing would otherwise grab too many characters into the variable name, as in ${VAR1}_$VAR2 (which without braces would be equivalent to ${VAR1_}$VAR2).
[131049790030] |Most adorned expansions (${VAR:=default}, ${VAR#prefix}, …) require braces.
[131049790040] |In a variable assignment, field splitting (i.e. splitting at whitespace in the value) and pathname expansion (i.e. globbing) are turned off, so VAR=$VAR1 is exactly equivalent to VAR="$VAR1", in all POSIX shells and in all pre-POSIX sh that I've heard of. (POSIX ref: simple commands).
[131049790050] |For the same reason, VAR=* reliably sets VAR to the literal string *; of course VAR=a b sets VAR to a since the b is a separate word in the first place.
[131049790060] |There are two other places where the double quotes are unnecessary: redirection targets (>$filename is as good as >"$filename") (only in scripts though, not in most interactive shells) and the word to match in case statements.
[131049790070] |You do need the double quotes in other cases, in particular in export VAR="${VAR1}" (which can equivalently be written export "VAR=${VAR1}").
[131049790080] |The similarity of this case with simple assignments, and the scattered nature of the list of cases where you don't need double quotes, are why I recommend just using double quotes unless you do want to split and glob.
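A small demonstration of these rules (safe to run; it only sets shell variables):

```shell
VAR1='hello world'

# Assignment: no field splitting, so quotes are optional here.
VAR=$VAR1
echo "$VAR"                        # hello world

# Command arguments: without quotes the value splits into two words.
set -- $VAR
echo "unquoted gave $# arguments"  # 2
set -- "$VAR"
echo "quoted gave $# arguments"    # 1

# Braces are needed when a valid name character follows the variable.
SUFFIX=ing
echo "${SUFFIX}s"                  # ings ($SUFFIXs would be an unset variable)
```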
[131049800010] |Practical tasks to learn shell scripting.
[131049800020] |I'm looking for some common problems in unix system administration and ways that shell scripting can solve them.
[131049800030] |Completely for self-educational purposes.
[131049800040] |Also I'd like to know how would you go about learning shell scripting.
[131049810010] |Any time you EVER find yourself doing something multiple times, script it.
[131049810020] |Think as lazy as you possibly can.
[131049810030] |Computers were built to do all of that menial crap.
[131049810040] |Any thing that smells like busy work needs a shell script.
[131049810050] |Personally, I learned by rummaging around in Slackware for a couple of years.
[131049810060] |See what happens when you strip your system back as much as possible.
[131049810070] |Learn to be comfortable with text.
[131049810080] |While everybody else is oohing and aahing over NetworkManager, learn how simple it is to make your own damn NetworkManager.
[131049810090] |Sure, it might not have as many use cases, but you can get something up and running, dynamically connecting via ethernet and wireless on demand, simply enough.
[131049820010] |How to learn it: Fall in love with the command-line.
[131049820020] |Use it regularly and pull up man pages often.
[131049820030] |Frequently, even.
[131049820040] |When I was first learning scripting, I couldn't count how many times I typed man bash.
[131049820050] |I also couldn't count how many times I pulled up the man page for another command.
[131049830010] |I learned it by writing a monitoring tool.
[131049830020] |It would connect to a bunch of machines via ssh and collect data like uptime, load, number of active connections, memory utilization and stuff like that.
[131049830030] |On my local machine it would show me that data as a text table.
[131049840010] |I would like to re-recommend the three books that I suggested in another thread, these are in my opinion the best books to get into the spirit of Unix:
[131049840020] |
The Unix Programming Environment from Kernighan and Pike
[131049840030] |
Unix for the Impatient
[131049840040] |
O'Reilly's Unix Power Tools.
[131049840050] |The first one is old, very old, but it is concise, a short read and will give you the shell chops that you need (regular expressions, sed, pipelines).
[131049840060] |The second one is incredibly entertaining.
[131049840070] |The third one is a collection of "best of" tricks from the Unix masters in the 90's (That is when I read it).
[131049840080] |The book keeps getting re-edited, so I am sure it contains many new nuggets.
[131049850010] |I second Miguel's recommendation of 'The Unix Programming Environment'.
[131049850020] |It's really old, but it's how I learned almost everything I know about the shell, and because it's so old you can get it for just a few bucks on amazon: http://is.gd/eiSn6
[131049860010] |Find a book or a manual and treat your chosen shell like a programming language, because it is.
[131049860020] |(Well, maybe not csh...)
[131049860030] |For starters, learn how to figure out if you're in a Bash shell, Bourne shell, csh, zsh, or whatever.
[131049860040] |Some of these are similar to each other like C and C++ -- deceptively different -- so knowing which one you're fighting with will help you find examples and manuals that actually will help in a given situation.
[131049870010] |There is a wealth of great information in the Advanced Bash-Scripting Guide, and it's frequently updated to stay current.
[131049880010] |How to install kvm in Debian (Lenny) over powerpc
[131049880020] |I am emulating powerpc using qemu-system-ppc on top of x86, and running debian-lenny-ppc with it.
[131049880030] |I want to install kvm inside that debian.
[131049880040] |I have learned that the kvm and qemu-kvm packages are not available for the powerpc architecture.
[131049880050] |I have found two packages (kvm-source and [Edit1]qemu) and installed them, but I don't know how to proceed with it further.
[131049880060] |How do I install kvm on powerpc?
[131049880070] |Do I need to cross-compile it as well, as given on http://www.linux-kvm.org/page/PowerPC_Host_Userspace?
[131049880080] |[Edit1]: Approach1: I downloaded qemu-kvm source code (http://sourceforge.net/projects/kvm/files/qemu-kvm/0.13.0/qemu-kvm-0.13.0.tar.gz/download) , configured
[131049880090] |But I am getting this error:
[131049890010] |Hi, as Gilles suggested, why don't you try the details in the PowerPC_KVM link?
[131049890020] |They have described the whole procedure there.
[131049890030] |Added a document on KVM on PowerPC.
[131049890040] |Thanks, Sen
[131049900010] |Undocumented goodies in Mac OS X defaults?
[131049900020] |What are your favorite and most useful undocumented settings accessible via the defaults utility on Mac OS X?
[131049910010] |Rather than listing one, I'll just point you to the biggest database of them I know and recommend their preference pane for easier tweaking of these hidden knobs.
[131049920010] |Can't register or update Solaris 11 Express
[131049920020] |Trying the x86 live CD, installed it on a VirtualBox virtual machine, network connectivity and guest additions are OK.
[131049920030] |The "Register" link (which they imply is needed to get updates) gets me to an error page.
[131049920040] |The browser goes to
[131049920050] |https://inventory.sun.com/RegistrationWeb/register/urn:st:5b620481-ea10-e3c8-f16a-99bfff4e8eac?product=OracleSolaris&version=11&locale=en_US
[131049920060] |And then to
[131049920070] |https://inventory.sun.com/RegistrationWeb/OracleSolaris/default/en_US/register-login.jsp
[131049920080] |Which gives me a "Not Found" page.
[131049920090] |The update manager says there are no updates.
[131049920100] |I imagine it'd say otherwise if I was able to register the OS.
[131049920110] |But I can't. Any clues?
[131049920120] |(Fun fact: "Add More Software" and "Update Manager", if called, like the proverbial goggles, did nothing... until I tried to run them from the console and saw a message that "root password was expired".
[131049920130] |Aha.
[131049920140] |The "root shall never login" ideology was preventing them from running.
[131049920150] |OK, I gave root a password and was able to install much-needed software like gcc.)
[131049930010] |Updates for Solaris 11 Express are only available if you purchase a support contract.
[131049930020] |There are no free updates at this time, so registering won't help there.
[131049930030] |The release notes, which the download page tells you to read before installing, warn you about the expired root password issue.
[131049940010] |Vim command for inserting a character
[131049940020] |I'm looking for the opposite of x.
[131049940030] |I want to insert just one character and stay in command mode.
[131049950010] |You could map a key(-sequence) to a command sequence, f.e.:
[131049950020] |Ctrl-i takes one character and returns afterwards.
[131049950030] |To make it persistent, add the same line to the local or global vimrc file:
[131049960010] |In some situations you can just use r.
[131049960020] |From :help r:
[131049960030] |Replace the character under the cursor with {char}.
[131049960040] |If you want more than one char, use R.
[131049960050] |(When used it enters in Replace mode.
[131049960060] |As usual, for more info, :help Replace).
[131049960070] |Remember to run vimtutor at least once to learn some commands.
[131049960080] |The r command is used in Lesson 3.2.
[131049970010] |Commandline e-mail client that syncs contacts with external server?
[131049970020] |I am planning to setup a groupware server that's either Citadel or SOGo, which supports the GroupDAV, CardDAV, or SyncML protocols.
[131049970030] |Is there a commandline e-mail client that supports syncing contacts via such protocols either out of the box or with a plugin/extension?
[131049980010] |According to the documentation of both of the software products you linked to (here and here) both support storing directory information using LDAP.
[131049980020] |If you do not find a command line email client that supports the protocols that you mentioned, you could try using LDAP instead.
[131049980030] |Every decent email client supports LDAP.
[131049990010] |What is the difference between shared memory in early unix systems vs modern unix systems?
[131049990020] |How could processes share memory in early versions of UNIX, versus modern implementations of shared memory?
[131050000010] |Very early UNIX systems did not have MMUs, and so effectively, all memory in the system was shared between all processes in memory.
[131050000020] |UNIX V7 was the first one that had memory management, AFAIK.
[131050000030] |The PDP-11 did not even have an MMU when it was released; see this PDF book, page 35.
[131050000040] |As time moved forward and MMUs became a commonplace thing, UNIX began to require it.
[131050000050] |And then memory could be separated between processes.
[131050000060] |In the 1980s we saw more IPC mechanisms, including shared memory managed by the OS (which was new in SVR1, circa 1983).
[131050000070] |SVR1 also introduced messages and semaphores, and the System V APIs are still available on modern systems for all three of these things.
[131050010010] |sh startup files over ssh
[131050010020] |I have some important commands I need to execute before any sh shell starts.
[131050010030] |This is required for commands passed on the ssh command line (ssh host somecommand) and other programs that run commands.
[131050010040] |In my .profile I have this:
[131050010050] |However, this fails:
[131050010060] |Notice the missing PATH options
[131050010070] |What is the proper name for the sh profile?
[131050010080] |Note: I do not have root access and don't want this applied to other users.
[131050010090] |Is there another way to do this?
[131050010100] |EDIT: It appears /bin/sh links to bash, which isn't surprising.
[131050010110] |What is surprising is that my profile is still ignored.
[131050010120] |Any suggestions?
[131050020010] |I ran out of time to test this, but looking through the man pages I found:
[131050020020] |man bash: When bash is started non-interactively, to run a shell script, for example, it looks for the variable BASH_ENV in the environment, expands its value if it appears there, and uses the expanded value as the name of a file to read and execute.
[131050020030] |Bash behaves as if the following command were executed: if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi but the value of the PATH variable is not used to search for the file name.
[131050020040] |man ssh: ~/.ssh/environment Contains additional definitions for environment variables; see ENVIRONMENT, above.
[131050020050] |The combination suggests how you can have ssh execute your .profile
[131050020060] |Unfortunately my server has PermitUserEnvironment set to the default value of no, which means this does not work for me (and, like I said, I don't have the time to play with it more).
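The BASH_ENV half is easy to verify locally without ssh (a sketch; the variable name GREETING and the temp file are arbitrary):

```shell
# A startup file that a non-interactive bash should source.
env_file=$(mktemp)
echo 'GREETING=hello' > "$env_file"

# bash -c starts a non-interactive, non-login shell: it reads
# neither ~/.profile nor ~/.bashrc, but it does honor BASH_ENV.
BASH_ENV=$env_file bash -c 'echo "$GREETING"'   # prints: hello
```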
[131050030010] |It seems worth noting that the command you mention in your question
[131050030020] |will pretty much never be useful.
[131050030030] |The variable substitution for $PATH is done by your local shell, and passed to ssh which executes echo on the remote system to print the contents of the path variable, as it expanded on your local system.
[131050030040] |Here is an example of me doing something similar between my Mac and a Linux machine on my network:
[131050030050] |Note how I needed to use quotes to prevent my local shell from expanding the variable.
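The effect is easy to reproduce without a network, using a child shell with a deliberately different PATH as a stand-in for the remote machine:

```shell
# Pretend this child shell is the remote host, with its own PATH.
remote() { env PATH=/remote/bin /bin/sh -c "$1"; }

remote "echo $PATH"    # double quotes: *your* shell expands $PATH first
remote 'echo $PATH'    # single quotes: the "remote" shell expands it,
                       # printing /remote/bin
```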
[131050040010] |(removed... can only have one Hyperlink as new user ~)
[131050040020] |
Update
[131050040030] |Sorry, I have not seen that this is about a non-interactive session, to which the above link does not apply.
[131050040040] |When Bash starts in sh compatibility mode, it tries to mimic the startup behaviour of historical versions of sh as closely as possible, while conforming to the POSIX standard as well.
[131050040050] |The profile files read are /etc/profile and ~/.profile, if it's a login shell.
[131050040060] |If it's not a login shell, the environment variable ENV is evaluated and the resulting filename is taken as name of the startup file.
[131050040070] |After the startup files are read, Bash enters POSIX compatibility mode (for running, not for starting!).
[131050040080] |Bash starts in sh compatibility mode when:
[131050040090] |
the base filename in argv[0] is sh (attention, dear uber-clever Linux users: /bin/sh may be linked to /bin/bash, but that doesn't mean it acts like /bin/bash)
[131050040100] |So the question is, why doesn't it execute it, even though your shell is started like this.
[131050040110] |Source
[131050050010] |Usually upon login, bash reads commands from:
[131050050020] |~/.bash_profile and ~/.bashrc
[131050050030] |From bash man page:
[131050050040] |~/.bash_profile The personal initialization file, executed for login shells
[131050050050] |~/.bashrc The individual per-interactive-shell startup file
[131050060010] |~/.profile is only executed by login shells.
[131050060020] |The program that calls the shell decides whether the shell will be a login shell (by putting a - as the first character of the zeroth argument on the shell invocation).
[131050060030] |It is typically not executed when you log in to execute a specific command.
[131050060040] |OpenSSH in particular invokes a login shell only if you don't specify a command.
[131050060050] |So if you do specify a command, ~/.profile won't be read.
[131050060060] |OpenSSH allows setting environment variables on the server side.
[131050060070] |This must be enabled in the server configuration, with the PermitUserEnvironment directive.
[131050060080] |The variables can be set in the file ~/.ssh/environment.
[131050060090] |Assuming you use public key authentication, you can also set per-key variables in ~/.ssh/authorized_keys: add environment="FOO=bar" at the beginning of the relevant line.
[131050060100] |Ssh also supports sending environment variables.
[131050060110] |In OpenSSH, use the SendEnv directive in ~/.ssh/config.
[131050060120] |However the specific environment variable must be enabled with an AcceptEnv directive in the server configuration, so this may well not work out for you.
[131050060130] |One thing that I think always works (oddly enough) as long as you're using public key authentication is to (ab)use the command= option in the authorized_keys file.
[131050060140] |A key with a command option is good only for running the specified command; but the command in the authorized_keys file runs with the environment variable SSH_ORIGINAL_COMMAND set to the command the user specified.
[131050060150] |So you can use something like this in ~/.ssh/authorized_keys (of course, it won't apply if you don't use this key to authenticate):
[131050060160] |Another possibility is to write a wrapper script on the server.
[131050060170] |Something like the following in ~/bin/ssh-wrapper:
[131050060180] |Then make symbolic links to this script called rsync, unison, etc.
[131050060190] |Pass --rsync-path='bin/rsync' on the rsync command line, and so on for other programs.
[131050060200] |Alternatively, some commands allow you to specify a whole shell snippet to run remotely, which allows you to make the command self-contained: for example, with rsync, you can use --rsync-path='. ~/.profile; rsync'.
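A sketch of such a wrapper (the exact script was not shown above, so treat this as an illustration of the exec-by-name trick; the demo builds a throwaway HOME and a symlink named printenv in place of rsync or unison):

```shell
demo=$(mktemp -d)
cat > "$demo/.profile" <<'EOF'
export GREETING=from-profile
EOF

# ~/bin/ssh-wrapper: source the login environment that ssh skips for
# non-interactive commands, then run the real program whose name was
# used to invoke us (via symlinks pointing at this file).
cat > "$demo/ssh-wrapper" <<'EOF'
#!/bin/sh
. "$HOME/.profile"
exec "$(basename "$0")" "$@"
EOF
chmod +x "$demo/ssh-wrapper"
ln -s "$demo/ssh-wrapper" "$demo/printenv"

# Invoking the symlink runs printenv *after* .profile was sourced:
HOME=$demo "$demo/printenv" GREETING   # prints: from-profile
```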
[131050060210] |There is another avenue which depends on your login shell being bash or zsh.
[131050060220] |Bash always reads ~/.bashrc when it's invoked by rshd or sshd, even if it's not interactive (but not if it's called as sh).
[131050060230] |Zsh always reads ~/.zshenv.
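So one hedged option is to put the settings at the top of ~/.bashrc, before any interactive-only configuration (the PATH addition here is just an example):

```shell
# ~/.bashrc - bash reads this even when sshd runs a non-interactive command
export PATH="$HOME/bin:$PATH"
if [ -n "$PS1" ]; then
  : # aliases, prompt and other interactive-only settings go here
fi
```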
[131050070010] |How do I modify the ci command in vim
[131050070020] |Just like "ci(" changes everything between parentheses, I want "ci$" to change everything between dollar signs (for editing LaTeX.) Is this possible?
[131050080010] |I don't think so; you'll have to have some way of telling vim whether you want to change between the dollar sign forward or back, and I don't think that it can be programmatically determined.
[131050080020] |You can, however, do a "cf$" to change everything from where you are on the line until the next dollar sign, or "cF$" if you want to go to the previous dollar sign.
[131050090010] |The vim LaTeX box plugin adds this feature.
[131050090020] |If you didn't want to use the rest of the plugin, you could just look at the source and see how it is done.
[131050090030] |I think some of the other LaTeX plugins for vim probably have that feature too.
[131050090040] |There are at least four major ones, the LaTeX box one is the one I'm most familiar with.
[131050100010] |What are high memory and low memory on Linux?
[131050100020] |I'm interested in the difference between Highmem and Lowmem:
[131050100030] |
Why is there such a differentiation?
[131050100040] |
What do we gain by doing so?
[131050100050] |
What features does each have?
[131050110010] |As far as I remember, "High Memory" is used for application space and "Low Memory" for the kernel.
[131050110020] |Advantage is that (user-space) applications can't access kernel-space memory.
[131050120010] |This is relevant to the Linux kernel; I'm not sure how other Unix kernels handle this.
[131050120020] |The High Memory is the segment of memory that user-space programs can address.
[131050120030] |It cannot touch Low Memory.
[131050120040] |Low Memory is the segment of memory that the Linux kernel can address directly.
[131050120050] |If the kernel must access High Memory, it has to map it into its own address space first.
[131050120060] |There was a patch introduced recently that lets you control where the segment is.
[131050120070] |The tradeoff is that you can take addressable memory away from user space so that the kernel can have more memory that it does not have to map before using.
[131050120080] |Additional resources:
[131050120090] |
[131050130010] |The first reference to turn to is Linux Device Drivers (available both online and in book form), particularly chapter 15 which has a section on the topic.
[131050130020] |In an ideal world, every system component would be able to map all the memory it ever needs to access.
[131050130030] |And this is the case for processes on Linux and most operating systems: a 32-bit process can only access a little less than 2^32 bytes of virtual memory (in fact about 3GB on a typical Linux 32-bit architecture).
[131050130040] |It gets difficult for the kernel, which needs to be able to map the full memory of the process whose system call it's executing, plus the whole physical memory, plus any other memory-mapped hardware device.
[131050130050] |So when a 32-bit kernel needs to map more than 4GB of memory, it must be compiled with high memory support.
[131050130060] |High memory is memory which is not permanently mapped in the kernel's address space.
[131050130070] |(Low memory is the opposite: it is always mapped, so you can access it in the kernel simply by dereferencing a pointer.)
[131050130080] |When you access high memory from kernel code, you need to call kmap first, to obtain a pointer from a page data structure (struct page).
[131050130090] |Calling kmap works whether the page is in high or low memory.
[131050130100] |There is also kmap_atomic which has added constraints but is more efficient on multiprocessor machines because it uses finer-grained locking.
[131050130110] |The pointer obtained through kmap is a resource: it uses up address space.
[131050130120] |Once you've finished with it, you must call kunmap (or kunmap_atomic) to free that resource; then the pointer is no longer valid, and the contents of the page can't be accessed until you call kmap again.
[131050140010] |On a 32-bit architecture, the address space range for addressing RAM is 0x00000000 through 0xFFFFFFFF,
[131050140020] |or 4'294'967'295 (4 GB).
[131050140030] |The linux kernel splits that up 3/1 (could also be 2/2, or 1/3) into user space (high memory) and kernel space (low memory).
[131050140040] |The user space range: 0x00000000 to 0xBFFFFFFF (the lower 3 GB, with the 3/1 split).
[131050140050] |Every newly spawned user process gets an address (range) inside this area.
[131050140060] |User processes are generally untrusted and therefore are forbidden to access the kernel space.
[131050140070] |Further, they are considered non-urgent, as a general rule, the kernel tries to defer the allocation of memory to those processes.
[131050140080] |The kernel space range: 0xC0000000 to 0xFFFFFFFF (the upper 1 GB, with the 3/1 split).
[131050140090] |A kernel process gets its address range here.
[131050140100] |The kernel can directly access this 1 GB of addresses (well, not the full 1 GB, there are 128 MB reserved for high memory access).
[131050140110] |Processes spawned in kernel space are trusted, urgent and assumed error-free; their memory requests are processed immediately.
[131050140120] |Every kernel process can also access the user space range if it wishes.
[131050140130] |To achieve this, the kernel maps an address from the user space (the high memory) into its kernel space (the low memory); the 128 MB mentioned above are reserved especially for this.
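On a running system you can see how (and whether) the kernel uses high memory in /proc/meminfo; the HighTotal/LowTotal lines only appear on 32-bit kernels built with highmem support, while MemTotal is always there:

```shell
grep -E 'MemTotal|HighTotal|LowTotal' /proc/meminfo
```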
[131050150010] |Leaking file descriptors
[131050150020] |What does it mean if a file descriptor is leaking?
[131050150030] |What does it mean?
[131050160010] |Those are file descriptors left open on the device (which you were resizing).
[131050160020] |lvm(8) says:
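To see which descriptors a process is holding open (and so spot a leak), you can list its entries under /proc; here for the current shell:

```shell
# each symlink under fd/ is an open file descriptor of the process
ls -l /proc/$$/fd
```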
[131050170010] |Cryptsetup not finding libgcrypt.so after upgrade
[131050170020] |I am running Fedora Core 13 with dm-crypt + LUKS, all standard with Fedora distros.
[131050170030] |After upgrading libgcrypt.so, cryptsetup moans at boot up that it cannot find the library where it expects it to be (/lib).
[131050170040] |So I booted with a live cd, decrypted the root partition of that drive, and mounted my old hard drive, then I chroot'ed in there.
[131050170050] |I ran cryptsetup and as expected it bombed out with the same error as when booting up...
[131050170060] |I fixed the symlinks, and tested cryptsetup.
[131050170070] |It worked.
[131050170080] |When I rebooted the same error was there again...
[131050170090] |I thought that somehow my changes were not being saved.
[131050170100] |So I booted back into the live OS, mounted the hdd and chroot'ed again, and to my amazement I found my symlink where I left it and cryptsetup working.
[131050170110] |Can someone tell me why this is happening?
[131050170120] |Why is cryptsetup still looking for the file in /lib and not finding it when it's obviously there?!
[131050170130] |P.S.
[131050170140] |I have also tried making duplicates of the libgcrypt library files in /lib, as well as recompiling cryptsetup from source; it all works 100% until I reboot, and then that error appears again!
[131050170150] |I am thinkin' of just rebuilding that box, but I thought I'd ask you guys first?
[131050170160] |Anyone come across a similar issue?
[131050180010] |Try rebuilding your initramfs files.
[131050180020] |It's possible that a static copy of libgcrypt has been incorporated into the initial ram disk (to decrypt early disks, for example), and it is still the version prior to the package upgrade.
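On Fedora 13 the initramfs is built with dracut; a hedged sketch of rebuilding it for the running kernel (the image path follows Fedora's naming, and the command needs root):

```shell
# no-op unless run as root with dracut installed
if [ "$(id -u)" -eq 0 ] && command -v dracut >/dev/null; then
  dracut --force "/boot/initramfs-$(uname -r).img" "$(uname -r)"
fi
```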
[131050190010] |How do you move files up 1 directory?
[131050190020] |So the question seems to be on my work mailing list right now... about how to move files up a single directory.
[131050190030] |I mean like a bulk move where there might be hundreds or thousands of files in the directory, and you might not be sure about whether there are dupes in ...
[131050190040] |What method would you use?
[131050190050] |How to handle dupes will vary, sometimes we'll overwrite, sometimes we need to be safer.
[131050190060] |IO can be important because these are production servers.
[131050190070] |But given the quantity, a per-file prompt isn't an option.
[131050190080] |Preservation of permissions, and timestamps, etc, is important.
[131050190090] |We usually won't know what the data is.
[131050190100] |Oh, and using mv isn't required; rsync and cp solutions are welcome.
[131050190110] |note: we're running CentOS 5.5 so let me know if it won't work there due to it being a more recent... feature
[131050200010] |You can try
[131050200020] |which will overwrite dupe files in ..
[131050200030] |You can use mv -u '{}' to not overwrite if the dupe in .. is the same or newer
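The command itself was lost in formatting; a sketch of the find/mv approach, demonstrated in a scratch directory (mv -t needs GNU coreutils, which CentOS 5.5 has):

```shell
cd "$(mktemp -d)" && mkdir child && cd child
touch a b .hidden
# move everything (dot files included) up one directory, overwriting dupes:
find . -mindepth 1 -maxdepth 1 -exec mv -f -t .. -- {} +
ls -A ..
```

Swap `mv -f` for `mv -u` to keep a file in .. when it is the same or newer.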
[131050210010] |I said on our ML
[131050210020] |obviously this isn't very safe... it will overwrite things.
[131050210030] |It might have limits that I've never run into.
[131050220010] |This example will move files from '/parent/old-dir' to '/parent':
[131050220020] |By rsync's rules it will replace dups with newer files from old-dir.
[131050230010] |The following is a python template that I have used to good effect in the past.
[131050240010] |I would recommend using rsync from the parent:
[131050240020] |which will backup all existing duplicate files in parent to file-original.
[131050250010] |mv -i only prompts if the destination exists.
[131050250020] |yes n | mv -i … moves all files that don't exist in the destination directory.
[131050250030] |On FreeBSD and OSX, you can shorten this to mv -n ….
[131050250040] |Note that neither of these will merge a directory argument with an existing directory of the same name in the destination directory.
[131050250050] |A separate issue is how to act on all the files in the current directory.
[131050250060] |There are two problems: grabbing all files (* omits dot files) and not running into a command line length limit.
[131050250070] |On Linux (or more generally with GNU find and GNU coreutils):
[131050250080] |With GNU find but not GNU coreutils (or older GNU coreutils):
[131050250090] |Portably:
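The portable command was elided; one way to write it (a sketch, demonstrated in a scratch directory) batches the moves through sh -c, so no shell glob or argument-length limit is involved:

```shell
cd "$(mktemp -d)" && mkdir sub && cd sub && touch a .b
# -prune stops find from descending into subdirectories; dot files are included
find . ! -name . -prune -exec sh -c 'mv -- "$@" ../' sh {} +
ls -A ..
```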
[131050250100] |As usual zsh makes things easier.
[131050250110] |It doesn't have a command line length limitation internally, so if you use its mv builtin you don't need to worry about that.
[131050250120] |And you can tell it not to ignore dot files with the D glob qualifier.
[131050250130] |Limitation: this doesn't work across filesystems (as of zsh 4.3.10).
[131050260010] |This will copy everything in the current directory to the directory above it, retaining all permissions, using hardlinks to minimize IO if possible, and on duplicates it creates filename~
[131050260020] |after that
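The two commands were lost in formatting; a plausible reconstruction (note the shell glob * skips dot files), demonstrated in a scratch directory:

```shell
cd "$(mktemp -d)" && mkdir sub && cd sub
echo mine > a
echo theirs > ../a              # duplicate already in the parent
cp -al -b -- * ..               # hardlink-copy up; the clash becomes a~
cd .. && rm -rf sub             # "after that": remove the originals
cat a a~
```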
[131050270010] |Shell command to read device registers?
[131050270020] |On a single-board computer running Linux, is there a way to read the contents of the device configuration registers that control hardware?
[131050270030] |I think it would be a wrapper for inw(), more or less.
[131050270040] |I'm looking for something equivalent to the U-boot memory dump (md) command, to be used in the context of driver debugging.
[131050280010] |I could be completely and totally wrong about this, and forgive me if I am, but if uboot's md command is just reading memory addresses mapped to device registers and returning the contents to you, couldn't you read those same memory locations with clever use of dd if=/dev/mem ...?
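Something like the following sketch (the address is a made-up example; on kernels built with CONFIG_STRICT_DEVMEM the read will be refused):

```shell
addr=0xFDF00000                       # hypothetical register address
# bs must divide the address so that skip lands exactly on it
dd if=/dev/mem bs=4 skip=$(( addr / 4 )) count=1 2>/dev/null | od -A x -t x4
```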
[131050290010] |I don't know if you can do it directly with a vanilla kernel.
[131050290020] |But it should be quite straightforward to write a simple driver that uses a "file" in /proc to export the memory content you would like to see.
[131050290030] |Then you can read your "file" with a simple script and have access to that memory.
[131050300010] |Is the PCI device configuration in /sys/bus/pci/devices/*/config of any help?
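Each device's config file there is its raw PCI configuration space; for example (the device address is hypothetical):

```shell
dev=/sys/bus/pci/devices/0000:00:00.0
# the first bytes are the vendor/device IDs and command/status registers
[ -r "$dev/config" ] && od -A x -t x1z "$dev/config" | head -n 4 || true
```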
[131050310010] |What is an inode?
[131050310020] |Possible Duplicate:What is a Superblock, Inode, Dentry and a File?
[131050310030] |Documentation of Unix filesystems often contains the term 'inode'.
[131050310040] |What is that, and what does it do?
[131050310050] |Why do DOS/Windows filesystems have no inodes (or do they)?
[131050320010] |"inode" is the informal term that refers to whatever on-disk chunk of data a Unix filesystem uses to hold the information pertaining to a single file.
[131050320020] |An "inode" traditionally holds the block numbers of disk blocks holding the file's actual contents.
[131050320030] |A directory entry traditionally held file name, file type, etc.
[131050320040] |The two chunks of data were separated.
[131050320050] |That said, a lot of Unixy things fall out of it.
[131050320060] |Traditionally, a file name wasn't part of the inode.
[131050320070] |The file name came from a "directory", a file (which had its own inode and contents) that matched up file names to inode numbers.
[131050320080] |A C-preprocessor macro allowed code to go from inode-number to disk block with very little calculation.
[131050320090] |So, many names could refer to the same inode.
[131050320100] |Hard links come out of this.
[131050320110] |So does the ability to have "." always refer to the current directory without trickery.
[131050320120] |A directory always contains "." and ".." filenames, which correspond to the inode numbers of the directory itself and the directory in which it resides; "." and ".." have entries in each and every directory.
[131050320130] |They're not special cases in the filesystem code that don't really have data anywhere.
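You can watch these pieces in action from the shell: a hard link is just a second directory entry pointing at the same inode (GNU ls/stat assumed here):

```shell
cd "$(mktemp -d)"
echo data > file
ln file link                    # a second name for the same inode
ls -i1 file link                # both names list the same inode number
stat -c '%n: inode %i, %h links' file link   # the link count lives in the inode
```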
[131050320140] |The hierarchy of Unix files come out of the inodes.
[131050320150] |A disk is essentially a linear array of "blocks" of a certain size: 512 bytes, 1 kilobyte, 4 kilobytes, whatever.
[131050320160] |Inode 0 was always at a designated disk block, allowing the Unix kernel to find the root of a filesystem by just knowing "inode 0", and it could associate "/" with inode 0.
[131050320170] |Inode 0 was also a directory file. "usr", "bin", "tmp", "dev" etc had entries in inode 0.
[131050320180] |So the inodes allow mapping a linear list of blocks of data from a disk into a hierarchical structure.
[131050320190] |Inodes lived on-disk.
[131050320200] |In the original unix filesystem, the first third or quarter of a disk was inodes.
[131050320210] |The rest was data blocks, allocated to files as needed, and whose disk block numbers ended up in inodes.
[131050320220] |Various filesystems over the years (BSD FFS, for example) tried to take into account the actual physical geometry of the disk by putting zones of inodes at different places on the disk.
[131050320230] |The Windows NT "NTFS" filesystem has something immediately analogous to an inode: entries in the master file table.
[131050320240] |NTFS seems to have inherited that from its ancestor, DEC VMS Files-11.
[131050320250] |Files-11 "file headers" appear almost identical to entries in $MFT.
[131050320260] |I'm not at all expert with other Windows/DOS filesystems (FAT, FAT32, etc etc), but the records in the File Allocation Table seem like analogs of a combined directory entry and inode to me.
[131050320270] |I imagine that combining naming and hierarchy with the disk block data is what makes FAT filesystems so fragile.
[131050320280] |You can't have a program comb through separate on-disk inode data and put the files it finds into "lost+found" - once the File Allocation Table gets corrupted, the blocks' membership in files is lost, as well as the files' membership in directories.
[131050330010] |How to check which process is hogging the HDD?
[131050330020] |Possible Duplicate:How do I go about tracking down IO usage in Linux and other *nix OS'?
[131050330030] |You know that part of top that says waiting (wa) when something is doing a lot of hard disk access.
[131050330040] |The higher this value is, the worse your system performs.
[131050330050] |What can I use to display the worst offenders?
[131050340010] |How to properly escape exclamation points in bash?
[131050340020] |Today, I was caught redhanded while attempting to code golf a password generator for Twitter.
[131050340030] |90 chars.
[131050340040] |Since that is a lot of spare space, I decided to raise the bar and make it executable too.
[131050340050] |139 chars.
[131050340060] |Nice, except obviously bash chokes on the exclamation point.
[131050340070] |Pesky exclamation point.
[131050340080] |"Let's escape it," I thought!
[131050340090] |I do have one spare character after all.
[131050340100] |Obviously...
[131050340110] |Leaving my asinine code golfing aside -- I can't explain this.
[131050340120] |With \!, the exclamation point was escaped, except it really wasn't, because the \! was left as-is for echo to pick up.
[131050340130] |One solution could have been using \x21 instead, but I'm not convinced that's the proper way of escaping an exclamation point in a bash command.
[131050340140] |tl;dr: How do you properly escape an exclamation point in a bash command?
[131050350010] |I should have Googled before I asked.
[131050350020] |Since you do not depend on bash to expand variables [..] you could use single quotes instead.
[131050350030] |Strings in single quotes are not expanded by bash.
[131050350040] |honk in reply to How do I escape an exclamation mark?
[131050360010] |Use single quotes:
[131050360020] |The rules for ! were sort of grafted onto the other quoting rules afterwards (from csh).
[131050360030] |They were very useful back when shells didn't have command line editing, but some people still use them now.
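For example (in an interactive bash; non-interactive shells don't do history expansion in the first place):

```shell
echo 'hello!'                # single quotes: the ! is literal
set +H 2>/dev/null || true   # or turn history expansion off altogether
echo "hello!"                # now safe even in double quotes
```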
[131050360040] |P.S.
[131050360050] |Since you're coding for bash:
[131050360060] |This works on most unices:
[131050360070] |(Not that I understand why you want to create a script or why the script name has to be two letters.)
[131050370010] |How to update Puppy Linux?
[131050370020] |A month or two ago, I installed the latest version of Puppy Linux to an old Eee PC which I hardly use any more.
[131050370030] |Well I'm on it now!
[131050370040] |But I can't figure out how to update it.
[131050370050] |It uses a weird Puppy package manager which only seems to have options for installing and uninstalling things.
[131050370060] |I found an option to update the database, but that didn't actually update any of the software on my system.
[131050370070] |I've looked through the menus several times and don't see anything that says update.
[131050370080] |How do I update Puppy Linux??
[131050380010] |Hi Ricket,
[131050380020] |Please go through this blog, as I think it describes exactly what you need: how-to-update/upgrade-kernel-for-puppy-linux. I think this site could also help you: flash-puppy. Another link which might help is: Update from 4.1.2 to 4.2
[131050380030] |Note : Take a look at this site also : installing-puppy-linux-to-your-hard-drive
[131050380040] |Thanks, Sen
[131050390010] |rxvt and Inconsolata (a font)
[131050390020] |Is it possible to use the rxvt terminal emulator with Inconsolata?
[131050390030] |Is it possible to use any TrueType fonts with rxvt?
[131050390040] |I'd like to use rxvt but would love to use my own fonts.
[131050400010] |rxvt does not support TrueType, but there is a fork rxvt-unicode (or urxvt) which can.
[131050400020] |urxvt(1) gives two examples:
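They amount to something like this (the font name and size here are just examples; the second form goes in ~/.Xresources, where lines starting with ! are comments):

```
urxvt -fn 'xft:Inconsolata:size=12'
! or persistently, in ~/.Xresources:
URxvt.font: xft:Inconsolata:size=12
```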
[131050410010] |Which mp3 tagging tool for Linux?
[131050410020] |Which app would you recommend for Linux to tag MP3s?
[131050410030] |Under Windows I used to use tag&rename and liked it a lot; it works well under Wine, but I want something that runs natively.
[131050420010] |There are various:
[131050420020] |
Generally, music players can also edit common tags, e.g. Banshee, Rhythmbox or Amarok
[131050420060] |and a lot others, try searching your distributions repository and test some of them.
[131050430010] |I've been a dedicated user of Picard for quite some time.
[131050430020] |The only cross-platform tagger that feels truly solid.
[131050430030] |Don't forget MusicBrainz's huge music database.
[131050440010] |I use Ex Falso.
[131050440020] |It was installed with QuodLibet music player, so I gave it a shot.
[131050440030] |I've used it with single files, batches of files...
[131050440040] |It's pretty intuitive, and best of all, works very well.
[131050450010] |I like TagTool and id3tool.
[131050450020] |Both are for the command line.
[131050450030] |I also find Picard useful for music files that exist in the MusicBrainz database.
[131050460010] |keep duplicates out of $PATH on source
[131050460020] |I have the following code that's source-d by my .shellrc
[131050460030] |but if I make changes to other code and then source this file, my path continues to get longer and longer with each source, each time appending these when they're already there.
[131050460040] |What can I do to prevent this?
[131050470010] |You could put a test around the "append this directory to path" command which would check to see if foo is already in the path before adding it, but it wouldn't buy you much.
[131050470020] |First, the test itself would be costly compared to appending a duplicate element.
[131050470030] |Secondly, a redundant element later in the path has no effect upon what does get executed when you execute a given command because the first matching executable in the path will still be the one executed.
[131050470040] |Finally most shells cache prior path hits in a hash table so the second time you execute my_command the path isn't even searched.
[131050470050] |About the only thing that not appending redundant entries will get you is a prettier looking path, but most paths are pretty ugly to begin with.
[131050470060] |If this aesthetic goal is really important to you, tell us which shell you are using and I can conjure up an "append this to path only if it isn't present" function.
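For a Bourne-style shell (bash, zsh, dash, ...), such a guard might look like this sketch:

```shell
# append $1 to PATH only if it is not already present
append_path () {
  case ":$PATH:" in
    *:"$1":*) ;;                 # already there; do nothing
    *) PATH=$PATH:$1 ;;
  esac
}
append_path /usr/local/bin
append_path /usr/local/bin       # the second call is a no-op
```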
[131050480010] |The line for symbolic link canonicalization is optional.
[131050480020] |If you remove it, also remove the next line (if you want to keep nonexistent directories), or change it to
[131050480030] |Note that the symlink canonicalization method only guarantees unicity amongst directories that were added by this function.
[131050480040] |It also doesn't handle edge cases like an NFS directory mounted on two locations or a Linux bind mount.
[131050490010] |One thing you could do is use an environment variable as a guard.
[131050490020] |So set an environment variable such as ____path_added.
[131050490030] |In your script, you can then just test whether that has been set before adding the path.
[131050490040] |A bit like a C header include guard.
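In a Bourne-style shell that could look like this (the directory added is a placeholder):

```shell
# sourced file: the body runs only the first time
if [ -z "$____path_added" ]; then
  PATH=$PATH:$HOME/bin          # hypothetical additions
  ____path_added=1
fi
```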
[131050500010] |I use these functions that are sourced from an initialization script by fink on os x (so credit goes to the fink developers).
[131050500020] |They work great and I can re-source my .bash_profile whenever I want.
[131050500030] |Don't ask me how they work...
[131050500040] |I just know they do :)
[131050500050] |I can use them like so to append or prepend to $PATH or $MANPATH (they'll work with any variable formatted like $PATH):
[131050510010] |How can I run keychain in a way that only has the first shell prompt for keys on startup?
[131050510020] |I open multiple shell tabs when I start KDE, and I've just added keychain to my ~/.shellrc. The problem is that all the tabs prompt for key passwords when I log in.
[131050510030] |This is quite annoying.
[131050510040] |Is there any good solution for this so that all the tabs simply start, and once I've logged into one tab, all of them have the keys loaded?
[131050520010] |Here are two methods:
[131050520020] |You can ensure that keychain only opens on one tab like this:
[131050520030] |But it may not be on the first tab you land on - you might have to hunt for it, which could be just as annoying.
[131050520040] |This works because mkdir is an atomic operation - only one script will succeed, and that one will display the prompt.
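A sketch of that idea (the key name and lock location are assumptions; note the lock directory must be removed again, e.g. at logout, before keychain will run on the next login):

```shell
# in ~/.shellrc - only the tab that wins the mkdir runs keychain
lock=${TMPDIR:-/tmp}/keychain-lock-$(id -u)
if mkdir "$lock" 2>/dev/null; then
  command -v keychain >/dev/null && keychain --quiet id_rsa
fi
# every tab still picks up the agent variables keychain wrote
[ -f "$HOME/.keychain/$(hostname)-sh" ] && . "$HOME/.keychain/$(hostname)-sh" || true
```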
[131050520050] |Another way will display the prompt on all the tabs, but will quit them once you respond on any one of them.
[131050520060] |You can poll a file or use inotify-tools like this:
[131050520070] |This one presents the prompt, but first it starts a watcher to see if a file is deleted.
[131050520080] |After the prompt is satisfied, the file is deleted, and the watcher will kill any other prompts that are waiting. inotifywait is from inotify-tools; inotify is a Linux API.
[131050520090] |There may be a similar API on other Unices, but if not, you only need a loop that polls to see if the file is deleted.
[131050530010] |lookbehind and using it with grep in Vi?
[131050530020] |Trying to get into Vi (not Vim), after learning Vim.
[131050530030] |
Vim has a lookbehind like /\(Not this\)\@, how to do it in Vi?
[131050530040] |
If I want to search recursively down a directory in Vim, I could do :vimgrep /\(Not this\)\@!$ -r *, what about Vi?
[131050530050] |
if (1) and (2) are not available in Vi, how do you accomplish them?
[131050530060] |Please, create tag lookbehind and regex.
[131050540010] |According to vim help (:help pattern-overview), \@i is not supported by vi.
[131050540020] |It may be you can enumerate all the possible combinations to avoid this, or find a different expression.
[131050540030] |Alternatively I guess you could farm the job off to some external tool or interpreter like perl.
[131050540040] |I checked egrep -- it doesn't support it.
[131050540050] |What are you searching for?
[131050540060] |Out of interest, why are you preferring vi over vim?
[131050550010] |Hum...
[131050550020] |I usually end up installing the complete version of vim in $HOME, even on Unix machines like Solaris stations that only have the old vi at first.
[131050550030] |Otherwise, grep+egrep+$(), or even perl as Edd suggested, will do the trick from the shell.
[131050560010] |Can I share a device from under /dev across hosts?
[131050560020] |Here's the situation.
[131050560030] |I have a video device /dev/video0 on a VMware Server and I want to access this device from within a virtual machine.
[131050560040] |However for whatever reason I can't connect the device directly to the VM, it must be connected to the host.
[131050560050] |Since under the Unix philosophy everything really is just a file, can I share a device under /dev using NFS, Samba, sshfs or some other protocol between two hosts, so that a Linux system on one server can access devices on a different server?
[131050570010] |No.
[131050570020] |You can export a device file through NFS or some other network filesystems.
[131050570030] |But the meaning of the device file is dependent on the machine where you open it.
[131050570040] |If you export /dev/video0 over NFS from a server machine to a client machine, the client machine just sees “character device 81:0”, and interprets it as its own video capture device.
[131050570050] |The client machine doesn't even need to have the same device number assignment as the server; for example an OpenBSD client would see the same file as the pseudo-terminal driver, because that's what char 81:0 is under OpenBSD.
[131050570060] |What you're asking for would be very nice, but also very hard.
[131050570070] |Every request on the client would have to be forwarded to the server and vice versa.
[131050570080] |There would have to be specific support in individual drivers.
[131050570090] |For example some drivers rely on shared memory between the process and the kernel, and supporting that transparently across the network would be hard and prohibitively expensive in many cases.
[131050570100] |I don't know if the video capture driver does use shared memory, but given that it's likely to transfer large amounts of data asynchronously, I expect it to.
[131050570110] |Linux has some specific support for network block devices.
[131050570120] |They do not rely on a network filesystem; the device file exists only on the client, and a daemon on the server emulates a physical block device (it might relay the operations to and from a real physical device, but often it reads and writes to an image file).
[131050570130] |You should look for a solution that's specific to video capture.
[131050570140] |Try to run as much of the data-intensive part on the machine to which the physical device is attached.
[131050570150] |Or find a virtual machine solution that supports direct access to the physical device from inside the virtual machine (I don't know if any host/guest solution does; hypervisor-based solutions are more likely to).
[131050580010] |In addition to Gilles' answer: as long as you do not intend to do ioctls on the file, it is simply a stream.
[131050580020] |So if you ran from guest
[131050580030] |/dev/fakevideo0 will behave as a buffer, so if you read from it you will get the stream from the camera.
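A sketch of what that looks like (here a local byte stream stands in for the remote `ssh server 'cat /dev/video0'` feed, and the FIFO path is a placeholder):

```shell
fifo=$(mktemp -u)
mkfifo "$fifo"                               # the guest-side fake device
# real use: ssh server 'cat /dev/video0' > "$fifo" &
head -c 16 /dev/urandom > "$fifo" &          # stand-in writer
bytes=$(head -c 16 "$fifo" | wc -c)          # a reader sees the streamed bytes
echo "$bytes"
```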
[131050590010] |renaming a fat16 volume
[131050590020] |What's the easiest way to rename (change the volume label of) a fat16 volume (e.g. on a USB drive) from linux?
[131050590030] |It seems like mlabel from the mtools package is meant to do this, but the documentation is not geared to rapid assimilation.
[131050600010] |Try sudo mlabel -i device ::label, for example sudo mlabel -i /dev/sdb1 ::new_label.
[131050600020] |Reference: RenameUSBDrive on the Ubuntu community documentation.
[131050610010] |Is it possible to run KVM over a qemu emulated powerpc architecture
[131050610020] |I understand that qemu uses binary translation to emulate machines, so irrespective of the underlying architecture, it can provide emulation.
[131050610030] |And, KVM uses Hardware Virtualization technique to make this process faster.
[131050610040] |Thus, KVM requires VT support from underlying architectures (which x86 processor provides).
[131050610050] |I have emulated powerpc architecture with qemu over x86 architecture.
[131050610060] |My question is whether it is possible to run KVM over this powerpc architecture.
[131050610070] |Any explanation in the answers would be quite helpful.
[131050610080] |Thanks
[131050620010] |KVM uses hardware acceleration.
[131050620020] |Usually it provides support for emulating only itself (i.e. Intel VT-x emulates Intel processors etc.) and I would be highly surprised if PowerPC provided any emulation of Intel processors (as it would require duplicating Intel functionality in the PPC processor, largely increasing the cost and size of such a unit).
[131050620030] |However there are planned ports of KVM to the PowerPC architecture, which would allow emulating PowerPC systems on a PowerPC CPU efficiently.
[131050630010] |Why does the local::lib shell code use eval and $()
[131050630020] |using local::lib requires you to add a line to your ~/.shellrc
[131050630030] |I don't understand what the point of using eval, and encasing the statement in $() is.
[131050630040] |I also noticed that csh doesn't require you to use those.
[131050630050] |So I'm wondering what the difference is, and whether or not I should use this for generic bourne shell, or zsh.
[131050640010] |perl -I$HOME/perl5/lib/perl5 -Mlocal::lib prints out some shell code.
[131050640020] |The point of eval $(…) is to execute that code in the context of the current shell.
[131050640030] |This is typically used to set environment variables.
[131050640040] |You can't use a subprocess for this as this would only affect the subprocess's environment.
[131050640050] |You can source a snippet:
[131050640060] |but that only works if the code that generates the variable values is written in shell.
[131050640070] |Here that code is written in Perl, so the Perl code generates shell code.
[131050640080] |dircolors uses the same technique, as do many other programs.
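You can see the technique with dircolors directly (GNU coreutils):

```shell
dircolors -b | head -n 1        # the program emits shell code...
eval "$(dircolors -b)"          # ...which eval runs in the current shell
echo "${LS_COLORS:+LS_COLORS is set}"
```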
[131050640090] |The shell snippets are generally kept very simple, just variable assignments (with plain strings for values) and export statements, so they are compatible with all Bourne-style shells (including any POSIX shell) and zsh. local::lib is gratuitously incompatible with some decade-old systems as it combines export with assignment (which is permitted by POSIX but not by the original Bourne shell).
[131050640100] |Csh requires a different syntax; local::lib emits csh syntax if $SHELL contains csh as a substring.
[131050640110] |Under Windows, local::lib generates the equivalent cmd syntax, since cmd has no equivalent of the . (source) command.