[131025490010] |How to install Linux on a Lenovo ThinkPad T400
[131025490020] |I want to completely format my computer and replace my current OS with Linux.
[131025490030] |How should I go about doing this?
[131025490040] |Particularly, how do I handle finding all the device drivers I'll need?
[131025490050] |I'm using a Lenovo ThinkPad T400.
[131025500010] |Take the safest road: try live CDs of a few distros to verify that everything is recognized.
[131025500020] |If successful, you can install to your hard disk without worrying.
[131025510010] |There is a lot of info on the ThinkPad T400; LinuxCertified sells a T400 with Linux pre-installed, your choice of Ubuntu or Fedora.
[131025510020] |Gentoo has a wiki for Thinkpads and so does Arch Linux.
[131025510030] |I think you should have a good experience with the T400.
[131025510040] |I would do as oposit suggests: try a few live CDs, see what works and what you like, and have fun :)
[131025520010] |Thinkpads are very popular with Linux users, so there's a lot of documentation out there.
[131025520020] |The standard resource for Thinkpad users is ThinkWiki.
[131025520030] |It's quite likely that a standard distro install will be sufficient.
[131025520040] |The Linux kernel probably includes all the drivers you need.
[131025520050] |Possible exceptions are the graphics drivers (look at the Nvidia or ATI websites for those) and in rare cases the wifi drivers.
[131025530010] |Be aware that recent thinkpads have a hidden partition (predesktop area) including all sorts of tools for system recovery and the like, which can be accessed by hitting the blue ThinkVantage button during boot up.
[131025530020] |Installing Grub into the MBR breaks this.
[131025530030] |With recent thinkpads you can install grub into the first sectors of the partition holding /boot and set this partition active; then you normally enter grub when booting, but the ThinkVantage button still works.
[131025530040] |I'm not 100% sure this already works with the T400, but I think it does.
[131025530050] |See this thinkwiki page for more information.
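For reference, installing GRUB legacy into the partition's first sectors rather than the MBR might look like this (a sketch, assuming /boot is on the first partition of the first disk):

    grub
    grub> root (hd0,0)
    grub> setup (hd0,0)   # writes stage1 to the partition's boot sector, not the MBR
    grub> quit
    # then mark that partition active/bootable, e.g. with fdisk's 'a' command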
[131025540010] |I have Ubuntu 10.04 installed on a T410 and all drivers were in the distro.
[131025540020] |I did make sure to get the model with Intel video and wireless chips though, as these are well supported.
[131025550010] |Can't map XF86AudioRaiseVolume key in QJoyPad
[131025550020] |I wanted to map my PSone gamepad to do some basic KDE/keyboard/mouse functions and everything works well except one thing.
[131025550030] |I use a Logitech Ultra-X keyboard which has a few multimedia buttons (play, home, volume up, volume down...) and I wanted to map the volume up and volume down keys to my pad along with everything else.
[131025550040] |volumedown works well, but volumeup makes some trouble. After I start qjoypad and map it, it does work, but after mapping it I still see "[NO KEY]" (while it is doing its job - volumeup; that's strange). After I turn qjoypad off and on I get this error: "Error reading button 6" and after that "Error reading definition for joystick 0".
[131025550050] |this is my layout:
[131025550060] |With xev I found out what 122 and 123 are (they work normally under KDE when I press them; it's just that 123 won't map and stay mapped for my pad).
[131025550070] |Any hint on what I should do to fix this and make my configuration stay? If it means anything or helps anyhow: the system is Arch with a 2.6.35 kernel and KDE 4.5.1. If you need any additional info I can provide it.
[131025550080] |Thank you very much =)
[131025560010] |I don't know a solution, but I know a workaround.
[131025560020] |Use xmodmap to map 123 to volumeup.
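A minimal sketch of that workaround (keycode 123 is taken from the question; XF86AudioRaiseVolume is the standard X keysym name):

    xmodmap -e 'keycode 123 = XF86AudioRaiseVolume'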
[131025560030] |Here is the man page: http://www.xfree86.org/4.2.0/xmodmap.1.html
[131025560040] |I used xmodmap in the past to map the different, unmapped media buttons on my M$ keyboard.
[131025570010] |How to remove "You have mail" welcome message
[131025570020] |When I open up my terminal it says "you have mail"; does anyone have any idea why?
[131025570030] |I am running OS X, but since it too is based on Unix and relies on files such as bashrc, bash_profile, etc.,
[131025570040] |I thought somebody here might know, and I'm not sure it's a platform-specific problem!
[131025580010] |It sounds like something has sent mail on (and to) the machine using the local mail exchanger.
[131025580020] |Most likely the email is an automated message from some installed package.
[131025580030] |Once you log in, type mail on the terminal to read and (presumably) delete the relevant mail.
[131025580040] |(Inside mail, use ? to find out what the commands are.)
[131025580050] |Once you've read or deleted any unread mail, you won't see the "You have mail" message again until/unless something else sends mail in the same way.
[131025580060] |Odds are once you know what's sending you the mail, you can find a configuration option to change where it sends it to.
[131025590010] |It is a "platform specific problem" :-)
[131025590020] |Mac OS X uses Bash as its default shell, like many other Un*x/Linux systems.
[131025590030] |Mail from your system is stored in a mailbox for your user, often located in /var/mail/$user.
[131025590040] |Because your mailbox will sometimes contain important messages (a failed cron job, etc.), you don't want to have to check it yourself every time.
[131025590050] |Bash (and many other shells) has a feature called mailcheck, which prints the message when you first open your terminal.
[131025590060] |$MAILCHECK is an environment variable which specifies how often, in seconds, to check for new mail.
[131025590070] |Mail is also checked before displaying the primary prompt.
[131025590080] |See more in man bash - Bash Variables.
[131025590090] |To prevent the checks for new mail, you can simply unset $MAILCHECK.
[131025590100] |The right place for this is your .bashrc file, which will be read when you open up a shell.
[131025590110] |It's in your user home directory.
[131025590120] |Append to ~/.bashrc:
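    unset MAILCHECK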
[131025590130] |to disable the message.
[131025590140] |Of course it is also possible to simply delete the mails like T.J. Crowder mentioned :-)
[131025600010] |Is there a basic tutorial for grep, awk and sed?
[131025600020] |I've been a Linux user for a while, and I have a pretty decent understanding of most of the common command line utilities.
[131025600030] |However, ones that come up again and again in relation to programming are grep, awk, and sed.
[131025600040] |About the only thing I've used grep for is piping stuff into it to find things in log files, the output of ps, etc.
[131025600050] |I haven't used awk or sed at all.
[131025600060] |Are there any good tutorials for these utilities?
[131025610010] |In my opinion, awk is more or less obsolete (however, others will strongly argue against this opinion); most people I know nowadays instead use a scripting language like Perl, or today often Python, for advanced text manipulation. sed is great for simpler text manipulation tasks; I use it all the time.
[131025610020] |I learned it mainly by looking at sed one-liners like those at http://sed.sourceforge.net/sed1line.txt.
[131025610030] |Regarding grep: Well you basically want to get a solid understanding of regular expressions (also needed for sed).
[131025610040] |Here I just used the texinfo manuals.
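To give a taste of that one-liner style (illustrative examples only):

    sed 's/foo/bar/g' file      # replace every foo with bar
    sed -n '10,20p' file        # print only lines 10 through 20
    grep -E '^[0-9]+$' file     # extended regex: lines that are all digits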
[131025620010] |AWK is particularly well suited for tabular data and has a lower learning curve than some alternatives.
[131025620020] |AWK: A Tutorial and Introduction
[131025620030] |An AWK Primer
[131025620040] |RegularExpressions.info
[131025620050] |sed tutorial (with links to more)
[131025620060] |grep tutorial
[131025620070] |info sed, info grep and info awk or info gawk
[131025630010] |The O'Reilly sed and awk book is great for, er, sed and awk.
[131025640010] |If you are to learn just one of these three (grep, sed, and awk), you can just learn awk/gawk. awk can do grep's and sed's functions, i.e. using regexes to search/replace text, plus much more, because it's also a programming language.
[131025640020] |If you learn the ins and outs of gawk/awk, you won't need to use grep/sed/wc/cut etc.
[131025640030] |Just one tool does it.
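A few illustrative equivalents (a sketch, not an exhaustive list):

    awk '/error/' file               # grep error file
    awk '{gsub(/foo/,"bar")}1' file  # sed 's/foo/bar/g' file
    awk 'END{print NR}' file         # wc -l file
    awk -F: '{print $1}' file        # cut -d: -f1 file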
[131025650010] |The Regular Expressions Cookbook published by O'Reilly would be enough to get you anywhere in any language that uses them.
[131025660010] |The authors of the book are Kernighan and Pike; the title is something like "The Unix Programming Environment".
[131025660020] |The book that I actually learned from was called "An Introduction to Berkeley Unix".
[131025670010] |apt-get or aptitude
[131025670020] |Possible Duplicate: What is the real difference between “apt-get” and “aptitude”?
[131025670030] |(How about “wajig”?)
[131025670040] |I'm new to the Linux (Ubuntu) world.
[131025670050] |When I need to install something I Google for it, and frequently I find some command like apt-get install [package name].
[131025670060] |My friend told me that aptitude provides more functionality and it's better to use it instead.
[131025670070] |But as far as I know, aptitude will be removed from the next version of Ubuntu (yes, it will be possible to install it, but it will be excluded by default).
[131025670080] |Which is better to use: apt-get or aptitude?
[131025680010] |how to view a directory's permission
[131025680020] |What is the command with which you can directly view the permission bits of a directory?
[131025690010] |There's a couple of ways. stat is used to show information about files and directories, so it's probably the best way.
[131025690020] |It takes a format parameter to control what it outputs; %a will show the octal values for the permissions, while %A will show the human-readable form:
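    # assuming GNU coreutils stat; directory name is a placeholder:
    $ stat -c '%a' somedir    # e.g. 755
    $ stat -c '%A' somedir    # e.g. drwxr-xr-x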
[131025690030] |Another (probably more common) way is to use ls. -l will make it use the long listing format (whose first entry is the human-readable form of the permissions), and -d will make it show the entry for the specified directory instead of its contents:
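    $ ls -ld somedir          # output below is illustrative
    drwxr-xr-x 2 user user 4096 Sep 29 10:00 somedir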
[131025700010] |How to rip scratched audio cds?
[131025700020] |What is currently the best way to rip scratched audio cds under Linux?
[131025700030] |What I find complicated, is that there are several tools available but it is not clear if one tool has better error correction features than the other.
[131025700040] |I mean, there are at least:
[131025700050] |cdparanoia
[131025700060] |cdda2wav
[131025700070] |cdrdao
[131025710010] |I use the Windows freeware Exact Audio Copy under wine, which has its emphasis exactly on error correction.
[131025710020] |This works very nicely (see its entry on winehq).
[131025710030] |If you want to stay native you might want to have a look at rubyripper.
[131025720010] |cdparanoia is designed specifically to handle damaged media, with a wide variety of strategies ranging from re-reading sectors to interpolating lost data using FFT.
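A minimal invocation might look like this (a sketch; see the manpage for the full set of options):

    cdparanoia -B      # rip the whole disc, one WAV file per track
    cdparanoia -z -B   # additionally, never give up on damaged sectors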
[131025730010] |resume transfer of a single file by rsync
[131025730020] |In Ubuntu, I want to copy a big file from my harddrive to a removable drive by rsync.
[131025730030] |For some other reason, the operation cannot complete in a single run.
[131025730040] |So I am trying to figure out how to use rsync to resume the file copy from where it was left last time.
[131025730050] |I have tried using the options --partial and --inplace, but together with --progress, I found that rsync with --partial or --inplace actually starts from the beginning instead of from what was left last time.
[131025730060] |Manually stopping rsync early and checking the size of the received file also confirms what I found.
[131025730070] |But with --append, rsync starts from what was left last time.
[131025730080] |I am confused, as on the manpage --partial, --inplace, and --append all seem to relate to resuming copying from what was left last time.
[131025730090] |Is someone able to explain their difference?
[131025730100] |Why do --partial and --inplace not work for resuming copying?
[131025730110] |Is it true that for resuming copying, rsync has to work with option --append?
[131025730120] |Also if a partial file was left by mv or cp not by rsync, will rsync --append correctly resume the file copying?
[131025730130] |Thanks and regards!
[131025740010] |To resume an interrupted copy, you should use rsync --append.
[131025740020] |From the man page's explanation of --append:
[131025740030] |This causes rsync to update a file by appending data onto the end of the file, which presumes that the data that already exists on the receiving side is identical with the start of the file on the sending side. [...]
[131025740040] |Implies --inplace, [...]
[131025740050] |Option --inplace makes rsync (over)write the destination file contents directly; without --inplace, rsync would:
[131025740060] |create a new file with a temporary name,
[131025740070] |copy updated content into it,
[131025740080] |swap it with the destination file, and finally
[131025740090] |delete the old copy of the destination file.
[131025740100] |The normal mode of operation mainly prevents conflicts with applications that might have the destination file open, and a few other mishaps which are duly listed in the rsync manpage.
[131025740110] |Note that, if a copy/update operation fails in steps 1.-3. above, rsync will delete the temporary destination file; the --partial option disables this behavior and rsync will leave partially-transferred temporary files on the destination filesystem.
[131025740120] |Thus, resuming a single file copy operation will not gain much unless you called the first rsync with --partial or --partial-dir (same effect as --partial; in addition it instructs rsync to create all temporary files in a specific directory).
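Putting it together, a resumable single-file copy might look like this (paths are placeholders):

    # first attempt; keep partial data if interrupted:
    rsync -av --partial bigfile /media/usb/
    # later, resume by appending to the partial copy on the destination:
    rsync -av --append bigfile /media/usb/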
[131025750010] |Names for ATA and SATA disks in Linux
[131025750020] |Assume that we have two disks, one master SATA and one master ATA.
[131025750030] |How will they show up in /dev?
[131025760010] |If I'm understanding your question correctly, the first parallel ATA hard drive under Linux will be /dev/hda, the second will be /dev/hdb, followed by /dev/hdc, etc.
[131025760020] |Serial ATA devices will show up the same way SCSI and USB devices do: /dev/sda will be the first one, followed by /dev/sdb, /dev/sdc, etc.
[131025770010] |Depending on your SATA driver and your distribution's configuration, they might show up as /dev/hda and /dev/hdb, or /dev/hda and /dev/sda, or /dev/sda and /dev/sdb.
[131025770020] |Distributions and drivers are moving towards having every hard disk called sd?, but PATA drivers traditionally used hd?, and a few SATA drivers did too.
[131025770030] |The device names are determined by the udev configuration.
[131025770040] |For example, on Ubuntu 10.04, the following lines from /lib/udev/rules.d/60-persistent-storage.rules make all ATA hard disks appear as /dev/sd* and all ATA CD drives appear as /dev/sr*:
[131025780010] |Why does my mapping of to :bY not work in gvim?
[131025780020] |In my .vimrc
file, I have the following two lines:
[131025780030] |but they do not work!
[131025780040] |In insert mode, typing
[131025780050] |returns
[131025780060] |(literally) and similarly so for
and if I try to execute the command I get
[131025780070] |(plus a newline).
[131025780080] |In normal mode there's no effect.
[131025780090] |Thus, I know that the key map isn't being gobbled up by X or my shell, but what else could be the problem?
[131025780100] |Other key maps work fine.
[131025790010] |Could it be because these keys are already used by vim for switching tab pages?
[131025790020] |See the description in vim's documentation.
[131025800010] |The n in :noremap is for normal mode, so it won't work in insert mode.
[131025800020] |If you want it to work in insert mode, use :imap.
[131025800030] |If you were trying to use to make it work in insert mode, I think you were looking for .
[131025810010] |Solved!
[131025810020] |In insert mode, anything on the RHS of the key mapping is inserted literally, so we need to use <C-o>, which enters normal mode for a single command.
[131025810030] |The <C-o> prevents the cursor from moving leftward, which occurs when invoking <Esc> at the end of a line.
[131025810040] |Also note that you can't have a space between <C-o> and :bnext, else it will be eaten by <C-o>.
[131025820010] |Connect Windows printer to Linux machine via samba
[131025820020] |How can I connect my Windows printer to a Linux (RHEL5) box via samba?
[131025820030] |I'm looking for a clear configuration file.
[131025830010] |You have to configure CUPS.
[131025830020] |Open the CUPS web interface at https://localhost:631 and click on "Add Printer" (don't be scared if it takes a while, it's scanning the network for network printers).
[131025830030] |If it does not automatically list your printer you can choose "Windows Printer via SAMBA".
[131025830040] |As the URL, enter:
[131025830050] |smb://[workgroup/]server[:port]/printer (workgroup and port are optional).
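For example (hypothetical workgroup, host and printer names):

    smb://WORKGROUP/winbox/HP_LaserJet_4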
[131025830060] |Next you just choose the right model and the printer should work.
[131025840010] |How to create a file even the root user can't delete
[131025840020] |How can I create a file that even the root user can't delete?
[131025850010] |Simple answer: You can't, root can do everything.
[131025850020] |You can set the "i" (immutable) attribute with chattr (at least if you are on ext{2,3,4}), which makes a file unchangeable, but root can just unset the attribute and delete the file anyway.
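For illustration (and to show how easily root undoes it):

    chattr +i important.txt   # set the immutable attribute
    chattr -i important.txt   # ...and root simply clears it again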
[131025850030] |More complex (and an ugly, hackish workaround): put the directory you want unchangeable for root on a remote server and mount it via NFS or SMB.
[131025850040] |If the server does not offer write permissions, that locks out the local root account.
[131025850050] |Of course the local root account could just copy the files over locally, unmount the remote stuff, put the copy in place and change that.
[131025850060] |You cannot lock out root from deleting your files.
[131025850070] |If you cannot trust your root to keep files intact, you have a social problem, not a technical one.
[131025860010] |Put it on a CD-rom! ;)
[131025860020] |(this should be a comment but I can't do comments...)
[131025870010] |Put your file on a readonly filesystem.
[131025870020] |This may be a remote FS where root is no longer root, a CD-ROM, or a removable device that can be write-protected, e.g., an SD card.
[131025880010] |Long ago I wrote a kernel patch (to 2.2.ancient) where attempting to delete a file called SELFDESTRUCT killed the calling process.
[131025880020] |This guarded against accidental "rm -rf *".
[131025890010] |timestamp, modification time, and created time of a file
[131025890020] |I know that ls -t and ls -f give different sortings of the files and subdirectories under a directory.
[131025890030] |Now I wonder what are the differences between timestamp, modification time, and created time of a file?
[131025890040] |How to get and change these kinds of information by commands?
[131025890050] |In terms of which kind of information do people say one file is "newer" than another?
[131025890060] |What kinds of changes to this information will not make the file "different"?
[131025890070] |For example, I saw someone wrote:
[131025890080] |By default, the rsync program only looks to see if the files are different in size and timestamp.
[131025890090] |It doesn't care which file is newer, if it is different, it gets overwritten.
[131025890100] |You can pass the '--update' flag to rsync which will cause it to skip files on the destination if they are newer than the file on the source, but only so long as they are the same type of file.
[131025890110] |What this means is that if, for example, the source file is a regular file and the destination is a symlink, the destination file will be overwritten, regardless of timestamp.
[131025890120] |On a side note, does the file type here mean only regular file and symlink, not types such as pdf, jpg, htm, txt, etc.?
[131025890130] |Thanks and regards!
[131025900010] |There are 3 kinds of "timestamps":
[131025900020] |Access - the last time the file was read
[131025900030] |Modify - the last time the file was modified (content has been modified)
[131025900040] |Change - the last time meta data of the file was changed (e.g. permissions)
[131025900050] |To display this information, you can use stat, which is part of the coreutils.
[131025900060] |stat will also show you some more information like the device, inodes, links, etc.
[131025900070] |Remember that this sort of information depends highly on the filesystem and mount options.
[131025900080] |For example, if you mount a partition with the noatime option, no access information will be written.
[131025900090] |A utility to change the timestamps would be touch.
[131025900100] |There are some arguments to decide which timestamp to change (e.g. -a for access time, -m for modification time, etc.) and to influence the parsing of a new given timestamp.
[131025900110] |See man touch for more details.
[131025900120] |touch can become handy in combination with cp -u ("copy only when the SOURCE file is newer than the destination file or when the destination file is missing") or for the creation of empty marker files.
[131025910010] |archive files and directories before transfer
[131025910020] |I was wondering what the advantage is of making all the files and directories into an archive file for transfer, such as by cpio, instead of transferring them directly, such as by cp or scp?
[131025910030] |Thanks and regards!
[131025920010] |I do this because it's often easier to handle one big file than thousands of small chunks.
[131025920020] |I don't have to calculate checksums for every file; most of the time it is enough to just create a checksum for one archive.
[131025920030] |Also it is easier for me to preserve file permissions.
[131025920040] |These are just some of my reasons.
[131025930010] |There are a few reasons you might want to tar up a bunch of files before transfer:
[131025930020] |Compression: You will get better compression by compressing one large file rather than many small files.
[131025930030] |At least scp can compress on the fly, but on a file-by-file basis.
[131025930040] |Connections: At least with scp, it makes a new connection for each file it transfers.
[131025930050] |This can greatly slow down the throughput if you are transferring many small files.
[131025930060] |Restart: If your transfer protocol allows restarting a transfer in the middle, it might be easier than figuring out which file was in progress when the transfer was interrupted.
[131025930070] |Permissions: Most archiving programs let you retain the ownership and permissions of files, which may not be supported in your transfer program.
[131025930080] |File location: If the destination location is at the end of a long path, or hasn't been decided, it might be useful to transfer an archive to the destination and decide later where the files should go.
[131025930090] |Integrity: It's easier to compute and check the checksum for a single archive file than for each file individually.
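As an aside, a common pattern that illustrates several of these points at once is streaming a tar archive through ssh (host and paths are placeholders):

    tar czf - mydir | ssh user@host 'tar xzf - -C /dest'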
[131025940010] |If using rsync is possible, the advantage of archiving first is almost gone.
[131025940020] |Compression: rsync has built-in compression (option -z).
[131025940030] |Connection: The number of connection (generally 1) doesn't increase with the number of files transfered.
[131025940040] |Restart: rsync only transmit the difference, so incremental transfer is simple.
[131025940050] |You can choose to keep partial files so it is faster to resume the transmission.
[131025940060] |Integrity: Checksums are used internally, and if you are paranoid about it, you can force the -c option so everything is checked again.
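A sketch combining those options (paths and host are placeholders):

    rsync -avz --partial /src/dir/ user@host:/dest/dir/   # -z compresses in transit
    rsync -avzc /src/dir/ user@host:/dest/dir/            # -c forces full checksum comparison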
[131025950010] |Connect to KVM instance using virsh when image launched through Eucalyptus?
[131025950020] |I'm using Eucalyptus on Ubuntu 10.04 to set up a private cloud.
[131025950030] |Sometimes I'm not able to ssh into the VM instances, and I'd like to be able to connect directly to the console of the VM instance.
[131025950040] |However, by default, that doesn't seem to work through virsh:
[131025950050] |Is there some way to enable this, for example, by changing the way Eucalyptus generates the XML file that gets passed to libvirt?
[131025950060] |Here's the libvirt.xml file that Eucalyptus generates:
[131025950070] |Here's the output of virsh dumpxml:
[131025950080] |Here's the full KVM command-line that ends up being invoked:
[131025950090] |Note: Cross-posted from serverfault since question migration isn't supported yet.
[131025960010] |I figured it out...
[131025960020] |You need to edit the /usr/share/eucalyptus/gen_kvm_libvirt_xml file.
[131025960030] |Here's the diff:
[131025960040] |(Also answered on serverfault).
[131025970010] |Why is there a mismatch between size reported by LVM and the size reported by df -h?
[131025970020] |I'm new to LVM and have been very confused by this:
[131025970030] |I am transferring a large file to a partition that I thought had about 1.5 terabytes of space on it.
[131025970040] |Near the end of the transfer, rsync exits with an error claiming that the partition is full.
[131025970050] |I investigate and find the following:
[131025970060] |This seems to imply that /var (the partition that I'm transferring to) has the amount of storage I expect.
[131025970070] |However, then I see:
[131025970080] |I'm guessing this has something to do with the volume being resized at some point.
[131025970090] |While I have reliable backups, I'd rather not interrupt services for the time it will take to get the backup and restore.
[131025970100] |Thus, is there any way to make the filesystem seen by the OS match the space available according to LVM, without losing data?
[131025980010] |If this is an ext3 filesystem, you can extend it to the LV size by running:
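    resize2fs /dev/VolGroup00/var    # hypothetical LV path; point it at the LV holding /var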
[131025980020] |If this is anything other than ext3, use the appropriate tool, e.g. xfs_growfs /var if it's XFS.
[131025980030] |This is absolutely nothing to be afraid of.
[131025980040] |I have extended hundreds of filesystems in more than 10 years on several operating systems and I have never seen the operation leading to a disruption of any kind.
[131025990010] |Command line join of password protected wireless networks in Ubuntu, Arch Linux, or other distros?
[131025990020] |Is there a way to join a password protected wireless network via the command line in installations of Ubuntu or Arch Linux with no desktop environment or any GUI installed?
[131025990030] |Also, could this be done during a text-based installation and set up of those systems?
[131025990040] |Just for example, this might be important when installing (without a GUI) from a live CD where many packages need to be downloaded, but only a password protected wireless network is available.
[131025990050] |Thanks.
[131026000010] |The package you are looking for is called wpa_supplicant; it handles logging into protected wireless networks.
[131026000020] |If you use it from Ubuntu (or other debian based distributions) it's fairly easy to set up and the process is rather simple (check the debian wiki for a few pointers).
[131026000030] |I don't know much about arch linux but it shouldn't differ too much.
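A minimal sketch of that process (interface name, SSID and passphrase are placeholders):

    wpa_passphrase MySSID 'secret-passphrase' > /etc/wpa_supplicant.conf
    wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf
    dhclient wlan0   # then obtain an address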
[131026000040] |If you still want the convenience you know from desktop environments, or are for some other reason tied to network-manager, you can use the cnetworkmanager (website) package, which allows you to talk to the network-manager daemon from a terminal.
[131026010010] |As tante said, use wpa_supplicant.
[131026010020] |Here is the ArchLinux wiki on it.
[131026010030] |have fun :)
[131026020010] |wicd-curses is good for you if you have wicd installed!
[131026030010] |filesystem for archiving
[131026030020] |I have some complex read-only data in my file system.
[131026030030] |It contains thousands of snapshots of certain revisions of a svn repository, and the output of regression tests.
[131026030040] |Identical files between snapshots are already de-duplicated using hard links.
[131026030050] |This way, the storage capacity doesn't need to be large, but it still consumes a lot of inodes, and this makes fsck painfully long for my main file system.
[131026030060] |I'd like to move these data to another file system, so that it doesn't affect the main file system too much.
[131026030070] |Do you have suggestions?
[131026030080] |Squashfs seems to be a possible choice, but I'll have to check if it can handle hard links efficiently.
[131026030090] |Thanks and regards.
[131026040010] |I would prefer XFS since I have very good experiences with this file system.
[131026040020] |But I really recommend that you make a test with your data and all of the suggested filesystems.
[131026050010] |If it's about fsck slowness, did you try ext4?
[131026050020] |They added a few features to it that make fsck really quick by not looking at unused inodes:
[131026050030] |Fsck is a very slow operation, especially the first step: checking all the inodes in the file system.
[131026050040] |In Ext4, at the end of each group's inode table will be stored a list of unused inodes (with a checksum, for safety), so fsck will not check those inodes.
[131026050050] |The result is that total fsck time improves from 2 to 20 times, depending on the number of used inodes (http://kerneltrap.org/Linux/Improving_fsck_Speeds_in_Ext4).
[131026050060] |It must be noted that it's fsck, and not Ext4, that will build the list of unused inodes.
[131026050070] |This means that you must run fsck to get the list of unused inodes built, and only the next fsck run will be faster (you need to run a fsck in order to convert an Ext3 filesystem to Ext4 anyway).
[131026050080] |There's also a feature that takes part in this fsck speed up - "flexible block groups" - that also speeds up filesystem operations.
[131026060010] |Btrfs has native support for snapshots, so you wouldn't have to use hard links for deduplication.
[131026060020] |You could recreate your current setup by creating a btrfs filesystem and loading it with the earliest revision that you need, and taking a snapshot, and then revving the repository forward to each point in time that you need a snapshot of and taking a snapshot at each step.
[131026060030] |This should be more efficient than hard links, and simpler to set up as well.
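A rough sketch of that workflow (device and paths are hypothetical):

    mkfs.btrfs /dev/sdb1
    mount /dev/sdb1 /archive
    btrfs subvolume create /archive/current
    # ...check out revision N into /archive/current, then:
    btrfs subvolume snapshot /archive/current /archive/revN
    # repeat: update /archive/current to the next revision and snapshot again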
[131026060040] |I also think (though I'm far from sure of this) that squashfs deduplicates files transparently, so even if it doesn't handle hard links, you'd still see benefits.
[131026060050] |If you never need to change the data in the filesystem, then squashfs is probably the way to go, since fsck could then be replaced by md5sum ;)
[131026070010] |I have read about several shops which use a DataDomain for exactly that purpose.
[131026070020] |Your archival script can be very simple (tar or rsync and cron, for example), and you don't need to worry about anything managing hard links.
[131026070030] |No need for incremental copies, except to conserve bandwidth.
[131026070040] |All the magic happens underneath at the block layer.
[131026070050] |It's not unusual to host 15-20TB worth of virtual data while only using 1-2TB worth of real disk space.
[131026070060] |You'll still have plenty left over for your disk backups.
[131026070070] |The data would be served over NFS, but I'm not sure if that is a problem.
[131026070080] |When FreeBSD gets ZFS v23, deduplication will be available for the rest of us.
[131026080010] |Substitute part of text file using bash script
[131026080020] |I'm writing a shell script (bash) to fetch and build several bits of software.
[131026080030] |The script also writes several small config files and needs to alter a couple of pre-existing config files.
[131026080040] |What is the best way to find and substitute a few lines of a text file from a bash script?
[131026080050] |This sounds like a job for sed, but I don't understand the syntax...
[131026090010] |Not an answer, but: http://catb.org/esr/writings/unix-koans/shell-tools.html
[131026100010] |awk/sed/bash/Python/Perl/Ruby and most other tools/programming languages all can do manipulation of files.
[131026100020] |The "best" way is the way you are familiar and comfortable with.
[131026100030] |If you don't know anything about sed, look it up and learn about it.
[131026100040] |Otherwise, if there's a programming language you know, just do it with that.
[131026100050] |Here's a bash script example
[131026100060] |sed example
[131026100070] |awk example
[131026100080] |Python example (use 'with' for later versions)
[131026110010] |sed s/@var@/$VALUE/g config, but beware of stray slashes (/) in $VALUE -- you might need to escape them or use another separator char.
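For example (a sketch; the placeholder name and file are hypothetical):

    VALUE='/usr/local'                      # contains slashes...
    sed -i "s|@prefix@|$VALUE|g" app.conf   # ...so use | as the separator instead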
[131026120010] |What should I expect if I switch from Ubuntu to openSuse
[131026120020] |Like probably most of you, I have long been using Ubuntu.
[131026120030] |I'm not an expert, but I have been using different distros until I settled with Ubuntu.
[131026120040] |I started using SuSE 5.x, Conectiva (that later became Mandriva, so it seems), RedHat, Mac OS X (yeah, I know, not Linux) and Ubuntu running mostly as a VM in the last couple of years.
[131026120050] |But ever since SUSE released the SUSE Studio I was tempted to switch back to it.
[131026120060] |It is way too convenient to keep your installation in the cloud and download your system ready to go.
[131026120070] |Here is my question.
[131026120080] |What should I expect from the switch?
[131026120090] |I know that SUSE uses RPM as its package manager, and I have no idea of the completeness of its repository compared to Ubuntu.
[131026120100] |When trying openSUSE on a VM I also miss the sudo command, but I am sure that it must have been some lack of configuration on my part.
[131026120110] |So, what else would be different?
[131026120120] |My main use for Linux is as a desktop and a bit of Java and Ruby programming.
[131026130010] |I have used openSuse for several years and have dabbled in Ubuntu and other distributions.
[131026130020] |What to expect:
[131026130030] |Centralised configuration is possible using Yast.
[131026130040] |You may or may not like this - it seems to generate quite strong opinions in a lot of people but I don't care about it much.
[131026130050] |Different desktops which work.
[131026130060] |The openSuse DVD includes several desktops, and each one seems to work properly.
[131026130070] |I have seen people having problems with programs which work in Ubuntu but not in Kubuntu, etc.
[131026130080] |This may be relevant if you are using virtual machines over the cloud and want a lighter desktop.
[131026130090] |Sudo works differently (as you seem to have noticed).
[131026130100] |The most obvious point is that root has a password in openSuse and you use that rather than the user password (although the root password is usually the same as the first user's).
[131026130110] |A less obvious point is that the path (or permissions or something?) is not changed to be root's rather than the user's.
[131026130120] |(If you want to run ifconfig, for example, you have to su and then run ifconfig, rather than sudo ifconfig.)
[131026130130] |There seems to be less stuff in the repositories, but there is everything I want - so I don't know what isn't there.
[131026130140] |Perhaps there are only 50 text editors rather than 100.
[131026140010] |The package repositories used by zypper are very complete and there are a number of extra channels you can add easily.
[131026140020] |Check out http://software.opensuse.org for more community-built packages and repositories.
[131026150010] |I love openSUSE, but have recently settled, hopefully temporarily back on Ubuntu as my desktop system.
[131026150020] |I did personally find openSUSE to be a little more buggy, though that could well be just a matter of bad luck.
[131026150030] |It also didn't support my webcam (an MS Lifecam) which Ubuntu does, but again, who knows.
[131026150040] |sudo can be set up to work however you want, but yes, by default it's a little different.
[131026150050] |Might be worth taking your config file with you for reference.
[131026150060] |In the default setup openSUSE has Novell's "SLAB", their replacement for the standard GNOME menu, but both the standard GNOME menu and the one Ubuntu/Fedora use are available.
[131026160010] |SuSE seems to install apps in a different directory structure.
[131026160020] |When I search online for answers, many good solutions are written for other distros.
[131026160030] |As long as you know where YAST installs the apps and how it sets up the configuration, it should not be a problem.
[131026160040] |I do not use YAST for configuring the apps as it tends to be less optimal than I would like.
[131026160050] |I like the YAST interface for installing and updating apps.
[131026170010] |While SUSEStudio can be used to build your own distro, if you so choose, I think it has a slightly different end goal.
[131026170020] |Novell has more of the notion of building applications, that include an OS and everything ready to go, and are patchable/maintainable.
[131026170030] |For example, if you have some application that is reasonably tricky to install on arbitrary Linux variants, it would be pretty straightforward to build a SUSE Studio image, configure your application once, and for anyone wanting to use your application, offer it as an ISO/VM/Appliance style thing.
[131026170040] |This makes more sense in a commercial space than in the free space, but it does have a place.
[131026180010] |Linux as virtualisation host and client performance under Core i7?
[131026180020] |(to skip the details, jump to last paragraph)
[131026180030] |A few months ago I built what I thought was a beefed-up box with an AMD Phenom II X4 955, 8 GB DDR3 1333 MHz RAM, plus a nice motherboard (can't remember the exact specs).
[131026180040] |However I couldn't get a VirtualBox Windows XP guest machine to perform well in it, and later read that AMD CPUs don't work so well in general (???).
[131026180050] |Anyway, I have been asked to configure and buy a new notebook for a family member, who needs to run a few legacy Windows apps but really wants to use Ubuntu as the main OS to try to squeeze good multimedia performance out of it.
[131026180060] |Right now I am looking at an AVADirect Clevo W860CU with NVIDIA® GeForce® GTX 460M, 8 GB DDR3 1333 MHz RAM, and a 7200-rpm hard drive.
[131026180070] |Due to past experience, I am not confident about virtualisation.
[131026180080] |One question is: Would a quad core i7 (say 840QM) with slower clockspeed perform better (or worse) than a dual core i7 (say 620M) with higher clockspeed for virtualised guests under Linux?
[131026180090] |Or should I just tell the person to go with a different host OS?
[131026190010] |This mainly depends on which software is run under Windows.
[131026190020] |VirtualBox can offer all cores to Windows, but if the applications running there only use one or two, this is of no help and the higher-clocked dual-core might be faster.
[131026190030] |(That is, if it is really clocked faster than the quad core running turbo boost.)
[131026200010] |I have a Celeron 430 processor and 2 GiB of RAM.
[131026200020] |Under VirtualBox I use Visual Studio and everything works fine.
[131026210010] |If you absolutely need virtualized performance, I've heard that VMWare does better than VBox, and I'm pretty sure it also supports graphics acceleration for Windows guests on Linux hosts.
[131026210020] |This might solve your performance problems with the XP guest.
[131026220010] |Recompile Kernel to Change Stack Size
[131026220020] |I need to recompile my kernel on RHEL WS5 with only two changes.
[131026220030] |Change stack size from 4k to 8k
[131026220040] |Limit usable memory to 4096.
[131026220050] |How do I recompile the kernel without changing anything else but these two items?
[131026230010] |I'm no expert on RHEL WS5, but for CentOS 5, which is basically RHEL with all references to Red Hat removed, there is a nice tutorial at centos.org which explains how to build a modified version of the distribution kernel.
[131026230020] |The procedure explained there will probably work for RHEL WS, too.
[131026240010] |To change only those values you will need the config the old kernel was built from.
[131026240020] |In RHEL you can find this in: /boot/config-$(uname -r)
[131026240030] |Copy this file to the kernel source and change the values you want.
[131026240040] |Use make menuconfig for an ncurses GUI.
[131026240050] |For other distributions: if the config option CONFIG_IKCONFIG_PROC was set, your kernel configuration is available under /proc/config.gz
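The overall flow might look like this (a sketch; the kernel source path varies by distribution):

    cp /boot/config-$(uname -r) /usr/src/linux/.config
    cd /usr/src/linux
    make menuconfig                # change only the stack size and memory options
    make
    make modules_install install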
[131026250010] |Using different versions of Python
[131026250020] |Background
[131026250030] |Since I develop python programs that must run on different python versions, I have installed different versions of python on my computer.
[131026250040] |I am using FC 13 so it came with python 2.6 pre-installed in /usr/bin/python2.6
and /usr/lib/python2.6
.
[131026250050] |I installed python 2.5 from source, and to keep things neat, I used the --prefix=/usr
option, which installed python in /usr/bin/python2.5
and /usr/lib/python2.5
.
[131026250060] |Now, when I run python my prompt shows I am using version 2.5.
[131026250070] |However, I am having some issues with the install.
[131026250080] |Package management.
[131026250090] |Using easy_install, packages are always installed in /usr/lib/python2.6/site-packages/.
[131026250100] |I downloaded the setuptools .egg for python 2.5 and tried to install it, but it gives me an error: "/usr/lib/python2.5/site-packages does NOT support .pth files"
[131026250110] |It seems that python2.5 is not in my PYTHONPATH.
[131026250120] |I thought the default install would add itself to the PYTHONPATH, but when I type echo $PYTHONPATH at the prompt, I just receive an empty line.
[131026260010] |I'm also using Fedora 13, and PYTHONPATH is not set.
[131026260020] |Within python, sys.path will give you a list of the paths used for importing scripts.
[131026260030] |I'm not familiar with how easy_install decides its destination directory but I'm sure there would be a command line argument you could give it.
[131026260040] |Try specifying which python version to run easy_install under by preceding your command with the full path to the python you want.
[131026260050] |Also check if easy_install is a symlink in bin to a script within one python version you have installed.
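For instance, running easy_install under an explicit interpreter might look like this (paths and package name are assumptions):

    /usr/bin/python2.5 /usr/bin/easy_install SomePackage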
[131026270010] |This sounds like a perfect application for virtualenv, a very popular tool for creating isolated Python environments.
[131026270020] |This is a sample command to specify the version of Python
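    virtualenv -p /usr/bin/python2.5 myenv   # interpreter path and env name are placeholders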
[131026280010] |Which run dialog
[131026280020] |I just switched from the standard Gnome window manager to openbox (still running inside Gnome) and like it a lot.
[131026280030] |However, now I need a new run dialog, e.g. the thing popping up when hitting Alt+F2 in Gnome.
[131026280040] |I see in the openbox wiki that I can use the one from Gnome with "gnome-panel-control --run-dialog", but maybe someone can recommend a better program for this?
[131026290010] |Personally I use gnome-do for that kind of stuff.
[131026290020] |Yeah, it's mono and some people don't like that, but if you enter a command, it runs it, and when it comes to running GUI applications it's a really quick way to trigger them.
[131026290030] |Since gnome-do has so many plugins, many of the actions I'd usually run via alt+F2 (like quickly mounting something) I can just do via gnome-do: I type "mo" and it already knows that I probably want to mount something and offers me the filesystems I have defined that I have not yet mounted (just as an example).
[131026290040] |If you don't like mono there is an app called "kupfer" which does similar things written in Python, it just doesn't have all the features gnome-do has.
[131026290050] |When I want to run "real" shell commands I tend to just open a terminal.
[131026300010] |There's probably hundreds of equally valid answers for this, but I use gmrun:
[131026300020] |It has miscellaneous useful features:
[131026300030] |You can run a command in a terminal using Ctrl+Enter
[131026300040] |It keeps a history of commands, so you can just keep hitting Up to cycle through them, or search through them with the standard shell mechanisms, Ctrl+R and !.
[131026300050] |It also has Tab-completion:
[131026300060] |It will let you run a file directly (it knows what program to execute for that particular file type):
[131026310010] |I had some success using bashrun; it's simple, with many features, and very customizable.
[131026310020] |a few screenshots:
[131026320010] |I love dmenu.
[131026320020] |It's fast: instantaneous, in fact.
[131026330010] |Installing Ubuntu, how do I get it to recognize the Crucial RealSSD C300?
[131026330020] |I'm building a new rig and got the RealSSD C300 for its supposedly stellar performance, but it is not recognized when I try to install Ubuntu 10.04 LTS 64-bit.
[131026330030] |Is there anything that I can do to get this recognized?
[131026340010] |Is it recognized by the BIOS?
[131026340020] |Does it work with a Windows Live-CD?
[131026340030] |According to e.g. this page or this blog post in German, it should just work with Ubuntu 10.04.
[131026350010] |The question lies in the SATA3 controller.
[131026350020] |This forum thread answers the question.
[131026350030] |http://ubuntuforums.org/showthread.php?t=1456238
[131026350040] |In summary: in the BIOS, change the SATA3 controller mode to AHCI; this should allow Linux to find and use the drive.
[131026360010] |How to install OpenBSD/vax 4.7 on multiple disks?
[131026360020] |Hi, I've got an old VAXstation 3100 Model 76, and I'd like to install OpenBSD/vax 4.7 on it.
[131026360030] |I've got two drives in it, an RZ23 (104MB) and another 1.09GB drive.
[131026360040] |Now, since the 1.09GB drive is too large for an operating system to boot from (VAXen have that "magical 1.072GB boundary"), I'd like to use the 104MB drive as the / partition, and all other partitions, including swap, should go onto the other drive.
[131026360050] |But how do I do this with the OpenBSD installer, since it lets me choose one disk only?
[131026360060] |I tried installing NetBSD/vax 5.0.2 beforehand, but sysinst segfaults right after I give the OK to install the sets.
[131026360070] |The VAXstation has both hard drives I mentioned above and 16MB RAM (which I'd like to expand some day).
[131026360080] |The machine is otherwise in perfect working order, except the NVRAM and RTC don't work any more; I'll change them (the new chip is already ordered).
[131026360090] |In case you'd advise another OS (aside from OpenVMS), you might give me hints here, too.
[131026370010] |Are you sure it only lets you use one disk?
[131026370020] |It asks you for a root disk to install the bootloader.
[131026370030] |You should select the smaller disk and select manual partitioning.
[131026370040] |Read the manual for disklabel or, if you know what you are doing, the installer help.
[131026370050] |After you set this disk up, other disks should be offered for setup in a list.
[131026370060] |Enter the name of the correct disk and follow on to disklabel and add the remaining partitions in that disk.
[131026370070] |I've only ever used the emulated VAX in simh, but I doubt there is any difference as long as both disks are actually detected correctly.
[131026380010] |tunneling VNC/rdesktop over ssh
[131026380020] |I have a friend behind a firewall, with a Windows computer.
[131026380030] |I have a Linux machine at home which is not behind a firewall.
[131026380040] |I want to have an rdesktop connection to his machine, without using any intermediate service such as LogMeIn.
[131026380050] |My plan is:
[131026380060] |Have him SSH to my machine (SSH is allowed by the firewall), and set up the appropriate tunnel.
[131026380070] |Activate rdesktop/vnc on my machine, on the currently running X server.
[131026380080] |What I don't like about it is the hassle of running programs as his user on the currently running X server.
[131026380090] |I'd rather have him set up the tunnel somehow for my user, so that I'll just be able to rdesktop localhost:1234 as long as he's connected to me.
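For reference, such a reverse tunnel might look like this (host and user names are placeholders); he runs the first command, I run the second:

    ssh -N -R 1234:localhost:3389 me@my.home.host   # run on his Windows box (e.g. via plink)
    rdesktop localhost:1234                         # run on my Linux machine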
[131026380100] |Any smarter way?
[131026390010] |I would prefer to set up a vpn (openvpn, for example) with the server on your machine and the client on your friend's machine.
[131026390020] |When he wants you to connect, he opens the vpn (no login involved on your machine) and you open your remote desktop client to his machine's IP (at least with openvpn, you can assign a "fixed" IP to his machine so you can save it, not needing to look it up every time).
[131026390030] |This way you have no login to your machine and you only access his machine when he opens the VPN.
[131026390040] |On the other side, you can shutdown the server when you don't want him to connect to your machine.
[131026390050] |Anyway, if you don't give him a user on your machine (or give him a user with only the access you want), he won't be able to do much there.
[131026390060] |And this way, you can do it with more friends easily if needed as they only need to install the vpn client.
[131026400010] |If I'm reading the question correctly, it sounds like you've got the ssh tunneling side of things down, but you want to run programs as your own user on your friend's machine, instead of running them in his session.
[131026400020] |There are many VNC servers (tightvnc is what I use, it's pretty solid) which create a virtual desktop running in the background, which you can then connect to, instead of connecting you to the session that's currently active on your friend's computer.
[131026400030] |Once you have tightvnc installed on your friend's machine, you can run "tightvncserver :1" to start the desktop on a different display, and then forward port 5901 (5900 + the display number) to your machine.
[131026400040] |Note that the default desktop is pretty spare; you can configure what runs when tightvnc starts in the ~/.vnc/xsession script on the remote machine.
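A sketch of the whole procedure (names are placeholders); the first two commands run on your friend's machine, the last on yours:

    tightvncserver :1
    ssh -N -R 5901:localhost:5901 me@my.home.host
    vncviewer localhost:1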
[131026410010] |Will enabling Hyper-Threading create two virtual half-speed processors?
[131026410020] |If I enable Hyper-Threading for my netbook which has an Intel Atom (1.6 GHz) will the kernel see two virtual 800 MHz processors?
[131026420010] |No, it will create two virtual 1.6 GHz processors.
[131026420020] |(However, when not under load, they will clock down to a much lower clock speed, then 800 MHz might be correct.)
[131026420030] |Do
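    cat /proc/cpuinfo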
[131026420040] |for information about them.
[131026430010] |What is "System Memory " displayed in nmon in AIX?
[131026430020] |I have a host with AIX 6.1 (6100-04-02-1007) upgraded from AIX 5.1 with Veritas Storage Foundation 5.1.
[131026430030] |Since the upgrade from AIX 5.3 to 6.1, running the vx services consumes a lot of memory that I cannot explain.
[131026430040] |This memory appears in 'nmon' as System (see below System 41.2%).
[131026430050] |If I do not start Veritas it is <20%.
[131026430060] |What does that "System" memory mean?
[131026430070] |How can I know what is consuming this memory?
[131026440010] |Anything like this should be in the man page.
[131026440020] |In particular:
[131026450010] |Can I convert an ext3 partition into ext4 without formatting?
[131026450020] |Hi there,
[131026450030] |my home directory is on a separate partition which still uses ext3.
[131026450040] |Is there a way to convert this partition into ext4 in a non-destructive way?
[131026460010] |Yes, you can.
[131026460020] |This is explained very nicely in the ext4-wiki at kernel.org. Basically it all boils down to
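    tune2fs -O extents,uninit_bg,dir_index /dev/DEV
    # (run a full fsck afterwards, as the wiki describes)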
[131026460030] |with /dev/DEV replaced by the partition in question.
[131026460040] |Although this should be non-destructive, I'd still strongly suggest to back up your data before doing it.
[131026470010] |What are other ways to share a tmux session between two users?
[131026470020] |I'm looking for a clean and easy way to share a tmux session with another user on the same machine.
[131026470030] |I've tried the -S socket-path option, but it requires opening up all permissions of the socket-path before someone else can connect to the session.
[131026470040] |It works, but it's a little cumbersome.
[131026470050] |For example:
[131026470060] |This works, but both users now share the same tmux configuration (the configuration of the user who initiated the session).
[131026470070] |Is there a way to allow the two users to use their own tmux config and their own individual tmux key bindings?
[131026470080] |For bonus points, ideally, it would also be nice to give read-only access of the tmux session to other users.
[131026480010] |As far as I know, it is not possible to share sessions with other users in a "clean" way.
[131026480020] |Also, read-only access is not possible if the client doesn't voluntarily use the -r switch.
[131026480030] |screen, another terminal multiplexer, supports the features you are looking for, though sadly also in a cumbersome way...
[131026480040] |I'm not sure if this is an option for you, but maybe someone else will find this useful.
[131026480050] |Solution for screen:
[131026480060] |Host a session:
[131026480070] |SUID bit of screen must be set :-/
[131026480080] |Open the session with screen -S sessionname
[131026480090] |Ctrl-a + :multiuser on
[131026480100] |Ctrl-a + :acladd otherUsername
[131026480110] |Join a session:
[131026480120] |screen -x username/sessionname
[131026480130] |You can set permission bits for the user (* for all) with :aclchg or :chacl. # appended will affect windows, ? appended will affect the commands.
[131026480140] |Examples:
[131026480150] |:aclchg * -wx "#,?" will set the session permissions to read-only for all users
[131026480160] |:aclchg foo +w 2 will give write access to user foo on window 2
[131026480170] |:aclchg bar +x detach will give user bar permission to detach the session
[131026490010] |Sync a local directory with a remote directory in Linux
[131026490020] |I have a folder with a few files in it, and some space on a web server.
[131026490030] |I want to do a bi-directional sync between the local folder and the remote one in linux, just on modification time.
[131026490040] |How do I do this? By the way, I cannot install anything on the server; for all intents and purposes it is just space.
[131026490050] |Note: I already have rsa key-pairs set up, so that it can happen silently.
[131026500010] |If it's installed on the server, use rsync; it's built for exactly that job.
[131026500020] |To get it bi-directional do this (quote from http://forums11.itrc.hp.com/service/forums/questionanswer.do?admit=109447626+1285799008594+28353475&threadId=1278777) :
[131026500030] |To bidirectionally sync a directory /src/foo on hostA to /dest/foo on hostB, including all the sub-directories, you would run these commands on hostA:
[131026500040] |rsync -auz /src/foo hostB:/dest
rsync -auz hostB:/dest/foo /src
[131026500050] |The first command pushes all the files that are newer on hostA to hostB.
[131026500060] |The second command will pull all the files that are newer on hostB to hostA.
[131026500070] |The critical options are: when copying, you must preserve file modification times ("-a" does this and other things; if you want to preserve just the modification times, use "-t" instead), and you must skip any files that are newer on the destination ("-u" does this).
[131026510010] |Every time you need a sync between two folders, rsync is a great, flexible choice.
[131026510020] |The problem is that rsync doesn't support the ftp protocol.
[131026510030] |A nice workaround for this is curlFtpFS:
[131026510040] |CurlFtpFS is a filesystem for accessing FTP hosts based on FUSE and libcurl.
[131026510050] |With curlFtpFS it's pretty easy to include a remote ftp folder in your filesystem.
[131026510060] |This is a short example which shows the usage of both tools:
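    # a sketch (mount point, credentials and paths are placeholders):
    mkdir /mnt/ftp
    curlftpfs ftp://user:password@ftp.example.com /mnt/ftp
    rsync -av ~/myfolder/ /mnt/ftp/myfolder/
    fusermount -u /mnt/ftp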
[131026510070] |Read the manpage for further information.
[131026510080] |If you don't want to go to the effort of installing curlFtpFS and just want to keep some files in sync, there are lots of ftp tools which offer such sync features:
[131026510090] |FTPSync.pl - simple PERL script to keep a local and a remote directory in sync
[131026510100] |weex - a non-interactive FTP client for updating web pages
[131026510110] |filezilla - GUI FTP client with lots of features
[131026510120] |...
[131026520010] |The tool of choice for unidirectional synchronization is rsync, and the tool of choice for bidirectional synchronization is Unison.
[131026520020] |Both require the executable to be available on both sides.
[131026520030] |If you can make a file executable on the server side, drop the unison binary there and make it executable.
[131026520040] |If you have Linux, *BSD, Solaris or Mac OS X locally, you can probably use a FUSE filesystem to make the web server space appear as a local filesystem — sshfs should work since you seem to have ssh access.
[131026520050] |Then use unison “locally”.
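A sketch of that combination (mount point and paths are placeholders):

    sshfs user@server:/path/to/space /mnt/web
    unison ~/myfolder /mnt/web/myfolder
    fusermount -u /mnt/web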
[131026520060] |Also note that most version control software (CVS/Subversion as well as distributed VCS) have synchronization as a by-the-way feature (check in on one machine and out on the other).
[131026530010] |Small inexpensive *nix box?
[131026530020] |I'd like to learn more about Unix & Linux and would like to set up a little home test server / headless box.
[131026530030] |(I'm thinking of compiling from scratch, to learn how that works; either Gentoo Stage II or Linux from scratch.)
[131026530040] |I'd likely only need a little bit of storage, a USB port, and a network connection.
[131026530050] |I've heard good things about the (no longer produced) NSLU2 and overheating issues with those "plug computers" made by Marvell and others.
[131026530060] |I'd like something low-powered and physically small, which is why I don't just buy/get an old box from craigslist, though I'm willing to be convinced that it's worth finding space next to the couch and a few bucks more in electricity.
[131026540010] |This doesn't answer your question directly, but did you consider playing with Linux inside a virtual machine?
[131026540020] |It's a very convenient way of experimenting with different distributions.
[131026540030] |You could use VirtualBox, which has an open source version.
[131026550010] |Actually I was going to suggest VirtualBox too.
[131026550020] |Seems like the best solution, but if you want cheap hardware, what about a SheevaPlug?
[131026550030] |Only $100 and it is the size of a wall wart.
[131026550040] |In fact it is a wall wart.
[131026550050] |Oops.
[131026550060] |Missed that part of your question.
[131026550070] |Ok, then how about an old computer from a recycling place like Freegeek.
[131026550080] |This one is in Portland but there is a branch in many major US cities.
[131026560010] |Array Performance very similar to LinkedList - What gives!?
[131026560020] |So the title is somewhat misleading...
[131026560030] |I'll keep this simple: I'm comparing these two data structures:
[131026560040] |An array, whereby it starts at size 1, and for each subsequent addition, there is a realloc() call to expand the memory, and then append the new (malloced) element to the n-1 position.
[131026560050] |A linked list, whereby I keep track of the head, tail, and size.
[131026560060] |And addition involves mallocing for a new element and updating the tail pointer and size.
[131026560070] |Don't worry about any of the other details of these data structures.
[131026560080] |This is the only functionality I'm concerned with for this testing.
[131026560090] |In theory, the LL should be performing better.
[131026560100] |However, they're near identical in time tests involving 10, 100, 1000... up to 5,000,000 elements.
[131026560110] |My gut feeling is that the heap is large.
[131026560120] |I think the data segment defaults to 10 MB on Redhat?
[131026560130] |I could be wrong.
[131026560140] |Anyway, realloc() is first checking to see if space is available at the end of the already-allocated contiguous memory location (0-[n-1]).
[131026560150] |If the n-th position is available, there is not a relocation of the elements.
[131026560160] |Instead, realloc() just reserves the old space + the immediately following space.
[131026560170] |I'm having a hard time finding evidence of this, and I'm having a harder time proving that this array should, in practice, perform worse than the LL.
[131026560180] |Am I wrong in my assumptions??
[131026570010] |Primary vs Logical partition
[131026570020] |I know that you can only have 4 primary partitions on a hard drive.
[131026570030] |But if you're using less than 4, is there a benefit/penalty for using logical partitions?
[131026580010] |Today, it doesn't really matter if you use primary or extended partitions.
[131026580020] |There have been times when /boot had to be a primary partition, but this isn't true any more.
[131026580030] |Also, earlier incarnations of MS Windows required that you install Windows to a primary partition.
[131026580040] |When I set up a new disk, I make the first partition primary and put /boot there, and the rest goes as logical partitions into one big extended partition, but this is just a personal preference.
[131026590010] |who loaded my kernel modules?
[131026590020] |Hello,
[131026590030] |I have compiled a kernel with some modules, for example the ath9k
for my wireless card.
[131026590040] |I don't have anything in the autoload configuration (i.e. my file /etc/modules.autoload.d/kernel-2.6
is empty).
[131026590050] |However, lsmod
still shows that the module has been loaded.
[131026590060] |I also notice that when I compile filesystem support as modules (reiserfs, jfs, xfs etc.) they also get autoloaded.
[131026590070] |Who is doing this?
[131026590080] |Can and should I disable it?
[131026590090] |I am using Gentoo.
[131026590100] |Thanks
[131026600010] |Udev loads modules automatically depending on what kind of hardware it finds.
[131026600020] |You can "blacklist" modules in order to stop them being autoloaded as described in the Gentoo udev guide.
[131026610010] |Get a list of the functions in a shared library?
[131026610020] |How can I get a list of the functions defined in a shared object library, or find out if a particular function is defined in one?
[131026620010] |Use nm with the -D (dynamic) switch:
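A minimal sketch (libfoo.so is a placeholder for your library):

    nm -D /usr/lib/libfoo.so                 # list the dynamic symbols
    nm -D /usr/lib/libfoo.so | grep ' T '    # "T" entries are functions defined in the library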
[131026630010] |There are different executable file formats on a *nix system. a.out was a common format some years ago, and today it's ELF on nearly all major systems.
[131026630020] |ELF consists of headers describing each of the file's data sections.
[131026630030] |The part you are looking for is the symbol table, where each symbol (function, variable) is mapped to its address.
[131026630040] |Shared libraries keep their global symbols in a section called .dynsym
[131026630050] |What you are looking for are symbols of type FUNC with GLOBAL binding in this section.
[131026630060] |readelf --syms ./libfoo.so
will give you an output of the symbols.
[131026630070] |On Solaris and FreeBSD there's also elfdump
available.
[131026630080] |objdump
also displays a lot of information about your object file, and you can specify a section by using the -j
switch.
[131026640010] |Debian: Which firewall for a newbie?
[131026640020] |I have to install a firewall on my server (so without X Server).
[131026640030] |It's a Debian Lenny system.
[131026640040] |If it is possible, I want to avoid the use of iptables
.
[131026640050] |Is there a way to install/configure a firewall for a newbie?
[131026650010] |First of all, a firewall should be the last step in securing a server.
[131026650020] |Remove all software and services which are not needed, update your system with the latest available security patches and review your config files.
[131026650030] |Why do you want to avoid iptables?
[131026650040] |"Because I'm a newbie" is no real excuse.
[131026650050] |A "one click everything secure" firewall doesn't exist, and if a software product uses such a slogan, its likely to be just snakeoil software.
[131026650060] |If you are not experienced in networking basics, you will have to learn them in order to configure a working firewall. :-)
[131026650070] |If you don't want to create the iptable rules yourself, you have two options:
[131026650080] |customize existing scripts found on the net
[131026650090] |use a GUI tool to create the rules yourself
[131026650100] |iptables is your interface to the networking layer of the kernel.
[131026650110] |Nearly every solution for linux will depend on it.
[131026650120] |Here are some commented example scripts/tutorials.
[131026650130] |You will easily find more with a google search.
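To give a flavour of what such a script looks like, here is a minimal default-deny sketch (the allowed port is just an example; adapt it to your services):

    #!/bin/sh
    iptables -F                   # flush existing rules
    iptables -P INPUT DROP        # drop inbound traffic by default
    iptables -P FORWARD DROP
    iptables -P OUTPUT ACCEPT     # allow outbound traffic
    iptables -A INPUT -i lo -j ACCEPT                                  # allow loopback
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT   # allow replies
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT                      # allow incoming SSH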
[131026650140] |Here is a list of GUI tools you can use to create your iptable rules:
[131026650150] |FirewallBuilder
[131026650160] |EasyChains
[131026650170] |FireStarter
[131026650180] |A great book about linux servers and security is "Building Secure Servers with Linux" from O'Reilly.
[131026650190] |Don't get discouraged and sorry for the "hard" words, but a server on the internet is not a toy and you will have some responsibility for this.
[131026660010] |You might consider trying ufw.
[131026660020] |While it was created for Ubuntu Server, I believe that it is also available in Debian.
[131026660030] |(UPDATE: Unfortunately, it looks like it is only available for squeeze and sid according to packages.debian.org, but it might still be worth looking at.)
[131026660040] |While I would say that you eventually want to move to writing your own iptable rules, I initially found ufw very easy to use and very easy to transition from.
[131026660050] |Here are some highlights:
[131026660060] |Convienient Syntax: ufw allow 22
or ufw allow ssh
is all that is required to allow inbound ssh traffic if your default policy is DENY.
[131026660070] |Easy Logging: ufw logging on
will turn on fairly reasonable logging.
[131026660080] |The nice thing about the logging is that by default it drops particularly noisy services (port 137 anyone?).
[131026660090] |Ability to implement complicated policies: On my home machine I use ufw and am currently running a fairly complicated policy.
[131026660100] |Ability to add your own iptable rules.
[131026660110] |Pretty much any policy can still be implemented with ufw even if the default interface doesn't provide a mechanism because you can always add your own rules.
[131026660120] |Great Documentation: man ufw
is often all you need to solve some problem or answer some question--which is great if you are setting up your firewall when offline.
[131026660130] |This is not a "click one button and you will be secure" firewall.
[131026660140] |At the end of the day what it really does is provide an easy to use rule-creation syntax, some abstraction around iptables-save
and iptables-restore
and brings some default rules and practice that a newbie might not know about.
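As a taste, a minimal first setup might look like this (assuming SSH is the only service you want reachable):

    ufw default deny    # deny inbound traffic by default
    ufw allow ssh
    ufw logging on
    ufw enable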
[131026670010] |I recommend the firehol
package.
[131026680010] |Give shorewall a try...
[131026680020] |I'm pretty happy with it and I feel it is very easy to configure whatever I need.
[131026680030] |(Including traffic shaping, NAT, DNAT and other things).
[131026690010] |How do I set up dual monitor wallpaper (Ubuntu/NVIDIA)?
[131026690020] |On Ubuntu 10.4 with NVIDIA drivers I have dual monitors setup with TwinView.
[131026690030] |How do I configure a single wallpaper to span both monitors?
[131026690040] |Right now the same wallpaper is replicated on both monitors.
[131026700010] |You can't. Lucid Lynx (your Ubuntu version) has lost this capability, and a bug has been filed.
[131026700020] |Update:
[131026700030] |Seems like there is a temporary fix already.
[131026700040] |Hope this helps:
[131026700050] |Set desired wallpaper as you usually do
[131026700060] |Execute this command in the console: gconftool-2 --set "/desktop/gnome/background/picture_options" --type string "spanned"
[131026700070] |Do the dance
[131026710010] |Update for future searchers: As of Ubuntu 10.10 (Maverick), you now have the "Span" option in the Appearance/Wallpaper Control Panel.
[131026710020] |For best results, be sure to create your wallpaper at the exact combined pixel size of your displays.
[131026710030] |E.g. for two 1280x1024 monitors, use a wallpaper sized 2560x1024 -- otherwise it will scale and center your wallpaper to fit.
[131026720010] |I have posted a script over in the Ubuntu forums and LinuxQuestions forums that addresses this problem (at least in my case); it uses ImageMagick to resize two background images and stitch them together, and then changes the background on a timer.
[131026720020] |Both images are random from an index file the script creates.
[131026720030] |In my case it is for Twinview where both monitors are at the same resolution.
[131026720040] |http://www.linuxquestions.org/questions/linux-desktop-74/how-do-you-have-separate-wallpapers-on-gnome-w-multimonitor-setup-694154/
[131026730010] |Sharing an X server (session) across computers
[131026730020] |I have 2 computers (both running linux) and I'm currently using synergy to use them.
[131026730030] |This is great, but now I'm wondering, is there any way (is it possible? being worked on? tried and failed? definitely not possible?) to not only share a mouse/keyboard/clipboard between the computers, but to share an X session?
[131026730040] |What I'm thinking is being able to drag X windows between monitors.
[131026730050] |I realize that this is extremely non-trivial to do and I know enough about linux (though not about xserver) that I'd like to pursue this idea even if there's nothing that does this for you.
[131026730060] |Also, I don't need to be able to just "install this and it works".
[131026730070] |I'm completely willing (and would be surprised if it didn't require it) to install custom kernels, mount partitions between machines, or whatever it takes,
[131026730080] |since I assume it would require the same instance of the X.org server running on both machines.
[131026740010] |You could look into xpra (http://code.google.com/p/partiwm/wiki/xpra) - it's not quite as smooth as you describe (no dragging between X servers) but it will give you the ability to start a program on one machine and then later detach it and reattach it on another machine.
[131026740020] |(Think of it as screen for GUI applications.)
[131026740030] |It's difficult to get tighter integration than that, because the process is still tied to the machine you start the program on, and process migration is a really hard problem.
[131026750010] |You can use a proxy X server such as DMX or XmetaX that is a single X server for clients to connect to, but which "draws" by sending the images to multiple underlying X servers running on the hardware.
[131026750020] |Moving a window from the screen on one computer to a screen on another would just change which X server the proxy is asking to draw it, not migrate the program to execute on another server.
[131026760010] |Problems with RPM (JRE)
[131026760020] |I'm trying to install a JRE on Ubuntu 10.04, but I failed:
[131026760030] |How can I work around that?
[131026760040] |Edit: I tried Riccardo's answer, but it still doesn't work:
[131026760050] |Those are only some of the errors:
[131026770010] |Since you're running Ubuntu, can't you just install a JRE that comes from Ubuntu repositories?
[131026770020] |E.g., sudo apt-get install sun-java6-jre
(which is presently at version 6.20dlj-1ubuntu3
).
[131026770030] |If you still need to go down the RPM route, you have two options:
[131026770040] |tell rpm
to ignore those dependencies (rpm --nodeps -i ...
), since all those listed are programs that are already installed on a standard Ubuntu system.
[131026770050] |(The only mismatch is gawk
, which is /usr/bin/gawk
on Ubuntu, but /bin/gawk
in the dependency output; you can either make a symlink (to be on the safe side) or just ignore it and hope the JRE scripts do not hard-code the path to gawk
.)
[131026770060] |Use alien (sudo apt-get install alien
) to convert the .rpm
file into a .deb
one; this should also fix dependencies or let you correct them.
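A sketch of the alien route (the .rpm filename is a placeholder):

    sudo apt-get install alien
    sudo alien -d jre-xxx.rpm      # writes a .deb into the current directory
    sudo dpkg -i jre-xxx*.deb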
[131026780010] |Is it good to make a separate partition for /boot?
[131026780020] |I've seen some people make a separate partition for /boot
.
[131026780030] |What is the benefit of doing this?
[131026780040] |What problems might I encounter in the future by doing this?
[131026780050] |Also, except for /home
and /boot
, which partitions can be separated?
[131026780060] |Is it recommended?
[131026790010] |I think this is more of a personal preference than anything else.
[131026790020] |Might even be a best-practice.
[131026790030] |My personal view of /boot is rather read-only based.
[131026790040] |Once in a while you need to write in there to upgrade your kernel or maybe add another OS to the GRUB bootloader.
[131026790050] |Besides that it's just needed to ... well, boot.
[131026790060] |So having it in a separate filesystem might help with mounting it read-only (there might even be a security aspect to it as well).
[131026790070] |Should it be a separate filesystem?
[131026790080] |I guess not..
[131026790090] |But is it a bad idea?
[131026790100] |No, not at all!
[131026800010] |One reason for having a /boot partition is that it allows for things like encrypted /, where the kernel and initrd are loaded from an unencrypted partition and then used to mount the encrypted root partition containing the operating system.
[131026800020] |It shouldn't matter for general usage however.
[131026810010] |One final reason, less important than those given, is that it can allow the PC to remain bootable if part of the disk is corrupted.
[131026810020] |The more partitions you have, the easier it will be to simply not mount the partition with the fault.
[131026810030] |This can be useful sometimes, but usually there's a better way anyway.
[131026810040] |EDIT: Another point: assuming Linux, using LVM can be a good way to avoid any potential problems; it makes it easy to resize "partitions" and add new space seamlessly.
[131026820010] |In answer to the 'what problems might it cause' part of the question: as with any partitioning there is always a risk that you will come to need more space than you initially allocated.
[131026820020] |While this is unlikely in the case of /boot
, there was recently an issue with preupgrade in Fedora caused by small /boot
sizes.
[131026830010] |This is a holdover from "ye olde tymes" when machines had trouble addressing large hard drives.
[131026830020] |The idea behind the /boot
partition was to make the partition always accessible to any machine that the drive was plugged into.
[131026830030] |If the machine could get to the start of the drive (lower cylinder numbers) then it could bootstrap the system; from there the linux kernel would be able to bypass the BIOS boot restriction and work around the problem.
[131026830040] |As modern machines have lifted that restriction, there is no longer a fixed need for /boot
to be separate, unless you require additional processing of the other partitions, such as encryption or file systems that are not natively recognized by the bootloader.
[131026830050] |Technically, you can get away with a single partition and be just fine, provided that you are not using really really old hardware (pre-1998 or so).
[131026830060] |If you do decide to use a separate partition, just be sure to give it adequate room, say 200 MB of space.
[131026830070] |That will be more than enough for several kernel upgrades (which consume several megs each time).
[131026830080] |If /boot starts to fill up, remove older kernels that you don't use and adjust your bootloader to recognize this fact.
[131026840010] |The main reason for the major enterprisey distros like Red Hat and, I think, SUSE to use a separate /boot is that they use LVM by default and GRUB cannot be used to boot from LVM.
[131026840020] |It is that simple.
[131026840030] |So if you want to use LVM, and that is a boon, you use a separate /boot.
[131026840040] |Personally, I think it is good practice to use both LVM and separate partitions for a host of things, like /var, /boot, /home and /tmp and even /usr on servers, for example in order to protect your root filesystem or data partitions from getting full.
[131026850010] |Unfortunately I'm fresh here so I cannot comment on the particular answers.
[131026850020] |Several people have implied that GRUB would not be able to boot from LVM
[131026850030] |GRUB2 has been booting happily from LVM2 without any problem on my home PC for about 1.5 years now.
[131026850040] |Sweetest thing since sliced bread.
[131026850050] |Oh, and that is supported by default by the ubiquity (alternative) installer.
[131026850060] |Try it
[131026860010] |Regarding the second part of the question, it may be useful to place in separate partitions anything that is independent of the current distribution.
[131026860020] |By also leaving extra space available on the drive, this allows, if necessary in the future, to either install a different distribution, or perform a reinstall of the current one, sharing access to anything that you'd want to see on both.
[131026860030] |Obvious candidates for separate partitions are then /usr/local and /home, as well as /root.
[131026860040] |I personally find it more efficient to create custom partitions, mount them in an arbitrary mountpoint, like /part/data, and then proceed with symlinks, as in:
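A minimal sketch of the idea (device and paths are placeholders):

    mount /dev/sdb1 /part/data                 # or add a matching line to /etc/fstab
    ln -s /part/data/music /home/alice/music   # expose the shared data where you want it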
[131026870010] |Another reason that I think is not mentioned is that you can use the filesystem type and the configurations that you prefer for /boot
which are not necessarily the same as the ones you would choose when it is part of /
. Features like journaling, checksums, etc. are not useful for /boot
and you can make booting faster by deactivating them or using a simpler filesystem (like ext2
).
[131026880010] |How to split and edit patches?
[131026880020] |Sometimes I need to split a big patch into smaller (disjoint) ones, e.g. for every separate feature included.
[131026880030] |Usually I do it via standard vim yank/dd commands and split-window switching.
[131026880040] |But are there some tools/vim-tricks to help with this kind of editing?
[131026880050] |For example, support for commands like: move the next 3 complete hunks to the patch file opened on the right.
[131026890010] |Somewhat off-topic, I guess, but I still think it's useful.
[131026890020] |If you use git to do your development you can easily split your whole changes into smaller "hunks" that embody one feature each.
[131026890030] |You end up with one commit per feature and can use git's git-format-patch
to create (and even sign and properly attribute) patches, I outlined how to do that here
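In outline, the workflow looks something like this (commit messages and the patch count are placeholders):

    git add -p                  # interactively pick the hunks that belong to one feature
    git commit -m "feature A"
    git add -p                  # pick the hunks for the next feature
    git commit -m "feature B"
    git format-patch -2         # write the last two commits out as patch files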
[131026900010] |You might want to take a look into patchutils [1].
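For instance, splitdiff from that package can break one big patch into per-file patches; a minimal sketch:

    splitdiff -a big.patch    # writes one patch file per file-level diff in big.patch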
[131026900020] |For the vim part, I wrote a small vim plugin that helps with navigating in patches: diff_navigator [2].
[131026900030] |[1] http://cyberelk.net/tim/software/patchutils/
[131026900040] |[2] (I can't post the link because stackexchange.com thinks I'm a spammer.
[131026900050] |Google for "diff_navigator")
[131026910010] |Execute Nohup command with input
[131026910020] |In UNIX, I have a process that I want to run using nohup.
[131026910030] |However this process will at some point wait at a prompt where I have to enter yes or no for it to continue.
[131026910040] |So far, in UNIX I have been doing the following
[131026910050] |So I start the process 'myprocess' using nohup and pipe in a file with 'y' then close the file.
[131026910060] |The lines above are effectively three separate commands - i.e.
[131026910070] |I hit enter on the first line in UNIX, then I get a prompt where I enter 'y' and then press enter to then finally type 'EOF' and hit return again.
[131026910080] |So this works perfectly but my problem is below...
[131026910090] |I now want to execute this in Perl, but I am not sure how I can execute this command as it is over three lines.
[131026910100] |I don't know if the following will work....
[131026920010] |If you just want to write a single y
to the stdin of the process, you can do this:
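A sketch, with myprocess standing in for your command:

    echo y | nohup myprocess > output.log 2>&1 &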
[131026920020] |If you want to keep writing y
for every prompt that comes up, the coreutil yes
exists for exactly this purpose -- it will keep writing whatever you tell it to to stdout.
[131026920030] |Its default is to output "y", so you can just:
[131026920040] |but if you need something else you can pass it as an argument
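Sketches, again with myprocess as a placeholder:

    yes | nohup myprocess > output.log 2>&1 &      # answers "y" to every prompt
    yes no | nohup myprocess > output.log 2>&1 &   # answers "no" instead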
[131026930010] |Where do X error messages go?
[131026930020] |If I start an X application from a terminal, the error messages go to the terminal.
[131026930030] |If I start them from a GUI, where do the error messages go, or alternatively how are they dealt with?
[131026940010] |They go to whatever terminal the X server was started from.
[131026940020] |If you boot to command line and then start X, you can see them by hopping back to that VT (hit Ctrl-Alt-F1).
[131026940030] |If you boot directly to X, I'm not sure where the messages show up by default; it may vary by distro.
[131026950010] |Usually all generic X errors (and those of some applications) go to the file ~/.xsession-errors
.
[131026960010] |Display managers such as xdm &gdm usually capture stderr from the processes they start and log it to a file such as /var/log/xdm.log
or /var/log/gdm/:0.log
.
[131026960020] |That includes the output from the X server being started.
[131026960030] |Once your session starts, it may redirect output from the session processes to a file such as ~/.xsession-errors
as tante pointed out in his answer.
[131026970010] |move files with progress bar
[131026970020] |When moving large directories using "mv", is there a way to view the progress (%)?
[131026970030] |The cp command on Gentoo had a "-g" switch to show progress.
[131026980010] |You can build a patched cp and mv which then both support the -g switch to show progress.
[131026980020] |There are instructions and patches at this page.
[131026980030] |However: The page instructs you to do
[131026980040] |which overwrites the original cp and mv.
[131026980050] |This has two disadvantages: Firstly, if an updated coreutils package arrives at your system, they are overwritten.
[131026980060] |Secondly, if the patched version has a problem, they might break scripts relying on standard cp and mv.
[131026980070] |I would rather do something like this:
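Something like this, assuming the patched build left the binaries in ./src:

    sudo cp src/cp /usr/local/bin/cpg
    sudo cp src/mv /usr/local/bin/mvg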
[131026980080] |which copies the files to /usr/local/bin which is intended for user compiled programs and gives them a different name.
[131026980090] |So when you want a progress bar, you say mvg -g bigfile /mnt/backup
and use mv normally.
[131026980100] |Also you can do alias mvg="/usr/local/bin/mvg -g"
then you only need to say mvg bigfile /mnt/backup
and directly get the progress bar.
[131026990010] |I don't like the idea of overwriting binaries from coreutils when there are simpler solutions, so here are mine:
[131026990020] |rsync: Rsync copies files and has a -P
switch for a progress bar.
[131026990030] |So if you have rsync installed, you could use a simple alias in your shell's dotfile:
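For example (the alias names are just a suggestion):

    alias cpg='rsync -aP'                          # copy with a progress display
    alias mvg='rsync -aP --remove-source-files'    # "move": copies, then deletes sources (empty dirs remain)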
[131026990040] |The downside is that rsync is a little bit slower than cp, but you should measure this with time and decide for yourself; I can live with it :-)
[131026990050] |Shell Script: A shell script can also create the progress bar.
[131026990060] |I found this a while ago on the net and I don't remember the source:
[131026990070] |This will look like:
[131026990080] |bar:
[131026990090] |‘bar’ - ‘cat’ with ASCII progress bar
[131026990100] |bar
is a small shell script to display a progress bar for all kinds of operations (cp, tar, etc.).
[131026990110] |You can find examples on the project homepage.
[131026990120] |It's also written for the Bourne shell, so it will run nearly everywhere.
[131027000010] |First off: I never copy large files without using ionice, unless I know that I will not want to use the computer for half an hour or more.
[131027000020] |Second: all my partitions are journaled, so intra-partition copying takes no time.
[131027000030] |If it is a long copy I do a "du -sm" on the files and "df -m|grep copy_to_partition".
[131027000040] |Then, if curious how much more time it will take, I do the "df" again and see how much of the files has been copied.
[131027010010] |Being kicked out upon logging in using ssh
[131027010020] |Does anyone know why I am kicked out immediately every time I log in?
[131027010030] |Here is the output:
[131027010040] |$ ssh tim@xxxx.xxx.xxx.xxx
[131027010050] |Password:
[131027010060] |Linux xxxx 2.6.32-24-server #43-Ubuntu SMP Thu Sep 16 16:05:42 UTC 2010 x86_64 GNU/Linux Ubuntu 10.04.1 LTS
[131027010070] |Welcome to the Ubuntu Server!
[131027010080] |Documentation: http://www.ubuntu.com/server/doc
[131027010090] |You have mail.
[131027010100] |Last login: Fri Oct 1 10:24:01 2010 from xxxx.xxx.xxx
[131027010110] |You are not authorized to log into this server
[131027010120] |Connection to xxxx.xxx.xxx.xxx closed.
[131027010130] |If I am indeed not authorized to log into this server, why do I receive the typical information upon logging in, such as the information about the server, the welcome message and the last time I logged in?
[131027010140] |How can I solve this problem?
[131027010150] |Thanks and regards!
[131027020010] |Probably because your shell has been switched to something that prints that message and exits afterwards.
[131027020020] |In that case, SSH will behave just as usual and display any welcome message, biff, etc.
[131027030010] |A google search shows someone might be using this script :-)
[131027030020] |http://ubuntuforums.org/showthread.php?t=1545205
[131027030030] |The default shell is replaced by a script looking for the username in an "allowed users" file and either starts a standard bash or displays this message and exits.
[131027040010] |Has the software on the server been updated recently?
[131027040020] |Some idiot package maintainers (and Ubuntu is one of the big violators) decide that they know better than you what settings you should have, and overwrite them.
[131027050010] |how to pass the result of `find` as a list of files?
[131027050020] |Hi :)
[131027050030] |The situation is, I have an MP3 player mpg321
that accepts a list of files as argument.
[131027050040] |I keep my music in a directory named "music", in which there are a few more directories.
[131027050050] |I just want to play all of them, so I run the program with
[131027050060] |.
[131027050070] |The problem is, some file names have whitespace in them, and the program breaks those names into smaller parts and complains about missing files.
[131027050080] |Wrapping the result of find
in quotes
does not help because it all becomes one big "file name", which is obviously not found.
[131027050100] |How can I do this then?
[131027050110] |If that matters, I am using bash
, but will be switching to zsh
soon.
[131027060010] |Try using find's -print0
or -printf
option in combination with xargs
like this:
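A sketch, assuming your files live under music/:

    find music -name '*.mp3' -print0 | xargs -0 mpg321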
[131027060020] |How this works is explained by find's manual page:
[131027060030] |-print0
[131027060040] |True; print the full file name on the standard output, followed by a null character (instead of the newline character that -print uses).
[131027060050] |This allows file names that contain newlines or other types of white space to be correctly interpreted by programs that process the find output.
[131027060060] |This option corresponds to the -0 option of xargs.
[131027070010] |I think Steven's solution is best, but another way is to use xargs' -I
flag, which lets you specify a string that will then be replaced in the command with the argument (instead of just appending the argument onto the end of the command).
[131027070020] |You can use that to quote the argument:
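A sketch (note this runs mpg321 once per file):

    find music -name '*.mp3' | xargs -I '{}' mpg321 '{}'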
[131027080010] |With GNU find, you can also use -print0
and xargs -0
, but there's little point in learning yet another tool.
[131027080020] |The -exec ... {} +
syntax gets little mention because Linux acquired it later than -print0
, but there's no reason not to use it now.
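A sketch:

    find music -name '*.mp3' -exec mpg321 {} +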
[131027080030] |With zsh or bash 4, this is a lot simpler:
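A sketch (in bash 4 you must enable globstar first; zsh has ** by default):

    shopt -s globstar    # bash 4 only; not needed in zsh
    mpg321 music/**/*.mp3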
[131027080040] |In zsh only, you can make a (part of a) pattern case-insensitive:
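A sketch, assuming extendedglob is enabled:

    setopt extendedglob
    mpg321 music/**/*.(#i)mp3    # matches .mp3, .MP3, .Mp3, ...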
[131027090010] |Windows 7 Dual Boot + Virtualization under Ubuntu 10.04?
[131027090020] |Question: I currently have a dual boot: Win 7 x64 Pro & Ubuntu 10.04.1 x64.
[131027090030] |Is there a way to boot Win 7 as a virtual machine under Ubuntu without reinstalling anything, in addition to maintaining the ability to dual boot?
[131027090040] |Background: I have a dual boot system with Windows 7 installed on one partition in a Raid 5 and Ubuntu 10.04.1 installed in a separate partition (actually split across three) in the same Raid 5. I have a Core i7-930 with 6GB of RAM.
[131027090050] |I'd be happy to provide any other hardware specs.
[131027090060] |I require Windows 7 x64 Pro for only a small number of things, basically just VS 2008 / VS 2010 so that I can use nSight from nVidia to debug CUDA / OpenCL projects.
[131027090070] |I must be able to dual boot because (and this is more just my suspicion) I don't want anything more than absolutely necessary sitting between the software and the three graphics cards I have installed.
[131027090080] |If it means anything, when in production mode where I'm running without virtualization, I have two cards set to exclusive mode and one set to prohibited mode (to drive the display).
[131027090090] |I'm worried that running nvidia-smi under either Ubuntu as the host OS or Win 7 as guest OS might bollux things up.
[131027090100] |I don't know much about Xen, KVM, etc.
[131027090110] |I've played around a bit with them, but I'm more than willing to use any virtualization software as long as it's free and it can accomplish what I want.
[131027090120] |Note that I'm a student -- this is all non-commercial development.
[131027090130] |I can, if absolutely necessary, reinstall everything, but I had many, many problems getting the CUDA environment to work under VS 2010 -- I installed/uninstalled/reinstalled VS '08 & '10 so many times that it corrupted the Win 7 registry and I had to start over from scratch.
[131027090140] |Now that it's working as a dual boot, I'd really like to avoid starting from scratch a fourth time.
[131027100010] |The on-topic part: yes, you can run a virtual machine under Ubuntu.
[131027100020] |CUDA requires direct access to the hardware.
[131027100030] |That means you'll have to run Windows either directly on the hardware or on a virtualization engine that allows a virtual machine to access hardware devices directly.
[131027100040] |That pretty much means hypervisor-based virtualization.
[131027100050] |VirtualBox is definitely out.
[131027100060] |Google suggests that Xen will do.
[131027100070] |Running a single Windows installation in different hardware configurations (such as the bare metal and a virtual machine) is notoriously difficult.
[131027100080] |If you really don't want to install Windows, you might prefer to run Ubuntu in a VM under Windows.
[131027100090] |It's not clear from your question whether you also want to run CUDA programs in Ubuntu.
[131027100100] |If you do, you can boot your existing installation on pretty much any hardware, there's little if any setup required.
[131027100110] |This does require a virtualization system that can bind a disk partition inside a VM, which I think VMWare can do but not VirtualBox.
[131027100120] |(It's also possible with VirtualBox by making a custom initrd with the vboxsfs
module, but that's no longer no-setup-required.
[131027100130] |An alternative method is to clone the system partition to a virtual machine disk.)
[131027100140] |ADDED: You might want to investigate AndLinux, which is a port of Ubuntu to CoLinux, a Linux port running on top of Windows; I don't know whether CoLinux can support CUDA.
[131027100150] |Given your workflow, I think your best bet is Xen.
[131027100160] |If this turns out not to work so that you need to dual boot, note that both OSes support hibernation, so with the right setup you can switch relatively quickly between the two (without needing to log in, restart all programs, etc.).
[131027110010] |I'm unclear on what you are asking.
[131027110020] |If you are asking whether there are legal reasons why you cannot use the same license to run from a regular boot and from a VM: the answer is that Microsoft claims you cannot, and Windows activation requires you to buy a second license.
[131027110030] |If you are asking about the technical problems, then I think Gilles has answered them fairly well.
[131027120010] |set up email addresses that aren't attached to a user
[131027120020] |I need to set up a lot of email addresses on a Linux machine, but I don't want to create a new user account for each one.
[131027120030] |The mail can be stored in a regular maildir or mbox.
[131027120040] |(I'll be checking the mail through some Perl code running on a cronjob.)
[131027120050] |How do I do this?
[131027130010] |The full answer really depends on what mail server program you are using.
[131027130020] |For both the postfix
and sendmail
you can redirect mail addresses to local accounts using the /etc/aliases
file: any line of the form address: unix-account
will deliver email addressed to address@your.domain
to the mailbox of unix-account
.
[131027130030] |For example, if /etc/aliases
contains a line like this:
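    postmaster: root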
[131027130040] |then UNIX user root
will get all the mail addressed to postmaster@your.domain
, without any need for postmaster
to exist as a regular UNIX account.
[131027130050] |Instead of a UNIX account name, you can specify the full path of a file -- mail will be delivered to that file (in mbox format).
[131027130060] |Other redirections are also possible; see man aliases
for details.
[131027130070] |Note: after editing /etc/aliases
, you have to run the command newaliases
(as root, typically) in order to have the mail server pick up the new addresses.
[131027140010] |You can use virtual users (and domains) stored in a database, so you don't need to create Linux users for the mailboxes, and administering the mail users and domains is very simple: just add or remove a record in the database table.
[131027140020] |An example for Ubuntu, Postfix and MySQL
[131027150010] |how to queue a command to run after another command finishes?
[131027150020] |Hi again,
[131027150030] |Sometimes I start a program that takes a very long time to finish (emerge
), then realize that I should go to bed instead of waiting for it.
[131027150040] |If I had known this in the first place, I would have run
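something like (the package name is a placeholder):

    emerge some-package && halt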
[131027150050] |However, now that I have started the program already, how can I "schedule" the computer to shutdown when that process finishes?
[131027150060] |Would Ctrl+z
then fg; halt
be OK?
[131027160010] |Yes, that would work.
[131027160020] |If unsure, you may test it with
[131027160030] |sleep 15
[131027160040] |Ctrl+z
[131027160050] |fg; echo "it works"
[131027170010] |&&
will do it I think.
[131027180010] |Another way is to just type halt
in your terminal while the first command is running; as long as the first command doesn't read input at some point, it will sit in the terminal's buffer and the shell will read it when the first program ends.
[131027190010] |Step-by-step guide to setting up display resolutions in Xorg.
[131027190020] |The default xorg configuration file created does not allow me to set up the display resolution I want (1360x768).
[131027190030] |The auto-configuration command Xorg -configure
didn't create the proper configuration file either (see below).
[131027190040] |What steps do I need to follow in order to manually add the resolution I want as one of the available ones?
[131027190050] |Update:
[131027190060] |I'm running this:
[131027190070] |CentOS 5.5, Linux 2.6.18-53.el5
[131027190080] |X Window System Version 7.1.1
[131027190090] |Gnome version 2.16
[131027190100] |Auto-configuration:
[131027190110] |The same hardware with another Linux distribution worked fine right after installation, allowing me to set the display to 1600x900.
[131027190120] |The xorg.conf file created during the installation allows me to set only 800x600 and 1024x768.
[131027190130] |If I run Xorg -configure
, then the X server won't start (it says that it could not be started with that configuration file).
[131027190140] |I had to manually go back to the previous xorg.conf file.
[131027200010] |I recommend you use the "xrandr" command for setting up resolutions and multiple displays.
[131027200020] |Currently xorg.conf is on its way out; more and more Linux distributions rely on xrandr and tend to ignore xorg.conf.
[131027200030] |PS: Just to be clear, this means you use xrandr to set up different stuff for your display, but you will still use the same X.org server.
[131027200040] |PPS: Ah ... and you use xrandr on the fly, after Xorg has started and you have a graphical interface.
[131027200050] |Changes are applied immediately, but they are not saved, so if you screw up something, a restart fixes your problem.
[131027200060] |After you are finished with your setup, just put all the xrandr commands into a file, make it executable, and add it to your DE's startup.
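A sketch of the typical flow (the output name VGA1 is a placeholder; the modeline numbers come from your own cvt output):

    xrandr                          # list outputs and the modes they currently offer
    cvt 1360 768                    # print a modeline for the desired resolution
    xrandr --newmode "1360x768" <modeline numbers printed by cvt>
    xrandr --addmode VGA1 1360x768
    xrandr --output VGA1 --mode 1360x768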
[131027200070] |EDIT (as requested ;) ):
[131027200080] |- xrandr documentation page: http://www.x.org/wiki/Projects/XRandR
[131027200090] |- an online version of 'man xrandr': http://www.manpagez.com/man/1/xrandr/
[131027210010] |Evolution and Exchange Server 2007 without MAPI
[131027210020] |My organization is running an Exchange Server 2007 with MAPI disabled for security reasons.
[131027210030] |How do I connect with evolution?
[131027210040] |When I connect using the Microsoft Exchange option I get the error
[131027210050] |The Exchange server is not compatible with Exchange Connector.
[131027210060] |The server is running Exchange 5.5.
[131027210070] |Exchange Connector supports Microsoft Exchange 2000 and 2003 only.
[131027210080] |If I use the Exchange MAPI option I get
[131027210090] |Authentication failed.
[131027210100] |MapiLogonProvider:MAPI_E_NETWORK_ERROR
[131027210110] |Which appears to be a network timeout, which confirms that administrators have MAPI turned off.
[131027220010] |As far as I know, this is not possible, at least if you want a reasonably stable solution.
[131027220020] |Which would, at this point, also exclude the Exchange MAPI option, even if it were available.
[131027230010] |Ubuntu won't Hibernate
[131027230020] |With my newly built computer, Ubuntu 10.04 Lucid won't hibernate or suspend.
[131027230030] |It's a custom-built Core i7 on the Gigabyte X58A-UD3R motherboard.
[131027230040] |Hibernate is enabled in the bios.
[131027230050] |Running sudo hibernate
gives:
[131027230060] |hibernate:Warning: Tuxonice binary signature file not found.
[131027230070] |Some modules failed to unload: nvidia hibernate: Aborting suspend due to errors in ModulesUnloadBlacklist (use --force to override).
[131027230080] |After installing linux-generic-tuxonice and linux-headers-generic-tuxonice, and rebuilding my nvidia dev drivers, it still does not work.
[131027230090] |But I think I'm closer.
[131027230100] |Now, when I run sudo hibernate
I get:
[131027230110] |"gmesg | grep error" returns
[131027230120] |Any ideas how to find out what is on usb9 and why it's failing to freeze?
[131027230130] |The only thing that I have on usb is the keyboard and mouse.
[131027240010] |Hi Halpo,
[131027240020] |According to this thread you need to have Tuxonice installed.
[131027240030] |While I'm not entirely sure about this, can you open synaptic
and look for packages named "tuxonice"?
[131027240040] |Unfortunately, different posts say different things, so all I can offer is this guide.
[131027240050] |Good luck :)
[131027250010] |It might help to find out what usb9 refers to.
[131027250020] |To do this, have a look at /sys/bus/usb/devices/usb9/{product,idProduct,idVendor}, which should help you identify which particular USB device is refusing to suspend.
[131027250030] |If it's something you can unplug, try without that.
[131027260010] |Bash autocomplete in ssh session
[131027260020] |It seems that bash doesn't want to autocomplete commands (what's annoying me right now is not autocompleting apt-get) when I'm logged into my machine from SSH.
[131027260030] |Is there some setting that will allow bash to autocomplete inside an ssh session?
[131027270010] |In short: source /etc/bash_completion
should do the trick (run it in the shell within the SSH session).
[131027270020] |Long story: in order for bash completion to work, you have to tell bash
how to complete each command's arguments.
[131027270030] |This requires a long sequence of invocations of the bash
built-in command complete
; therefore, they are usually collected in a separate script (or several ones in /etc/bash_completion.d/*
) that loads them all.
[131027270040] |Being a regular shell script, you can always load the bash_completion
in any shell startup script (~/.bash_profile
, ~/.bash_login
, ~/.bashrc
)
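For example, a snippet for ~/.bashrc:

    if [ -f /etc/bash_completion ]; then
        . /etc/bash_completion
    fi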
[131027270050] |Further reading:
[131027270060] |section Programmable Completion in the man page bash(1)
[131027270070] |help text for the complete
command (run: help complete
in bash
)