[131032340010] |
Installing a new kernel (from the command line) alongside the old kernel, and effective configuration of 'menuconfig'
[131032340020] |I need to install another kernel (2.6.34) on my Fedora machine (x86), and I need both the old and the new kernel to show up as options in the boot menu.
[131032340030] |I have downloaded the new kernel and now need to compile and build it.
[131032340040] |Can you explain the steps for doing that?
[131032340050] |I found the relevant steps in the discussion below, but I have doubts about steps 6 and 7 in the following link, which explains the installation of a new kernel.
[131032340060] |http://www.cyberciti.biz/tips/compiling-linux-kernel-26.html
[131032340060] |Also, can you explain how to configure 'menuconfig' effectively and what it actually aims to do?
[131032350010] |Hello Renjith,
[131032350020] |Have you tried this wiki page?
[131032350030] |It looks pretty much all that you need.
[131032350040] |Regarding the boot options, what boot loader are you using?
[131032350050] |Grub will probably detect your kernel when you run update-grub or grub-mkconfig.
[131032360010] |If you just need any 2.6.34 kernel, you might head over to koji and try to find a precompiled one for your version of Fedora.
[131032360020] |You can install it as root after downloading all required rpms with yum localinstall kernel-*.rpm and it will automatically appear in Grub.
[131032360030] |If you need to modify the kernel, it is best to also start with the distribution kernel and modify it to suit your needs.
[131032360040] |There is an extensive howto in the fedora wiki.
[131032360050] |Lastly if you really need to start from scratch with the sources from kernel.org, you have to download the source and extract the archive.
[131032360060] |Then you have to configure the kernel.
[131032360070] |For this, say make menuconfig for a text-based (ncurses) configuration or make xconfig for a graphical one.
[131032360080] |You might want to start with the old configuration of the running kernel, see http://unix.stackexchange.com/questions/2496/recompile-kernel-to-change-stack-size.
[131032360090] |When you are finished configuring, say make to build the kernel, then make modules to build kernel modules.
[131032360100] |The following steps have to be done as root: say make modules_install to install the modules (this will not overwrite anything of the old kernel) and finally make install, which will automatically install the kernel into /boot and modify the Grub configuration, so that you can start the new kernel alongside the old one.
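As a rough, hedged sketch of that whole sequence (the version number and paths are examples from the question; the last step needs root):
tar xjf linux-2.6.34.tar.bz2 && cd linux-2.6.34
cp /boot/config-$(uname -r) .config    # optional: start from the running kernel's configuration
make menuconfig                        # or: make xconfig for the graphical variant
make && make modules                   # build the kernel and the modules
su -c 'make modules_install && make install'   # install modules, kernel and Grub entry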
[131032370010] |Preseeding Ubuntu 10.04
[131032370020] |I need to preseed a dual boot installation of Ubuntu 10.04.
[131032370030] |I want partman to use all existing Linux partitions and all free space (like the option when installing Fedora). What would that recipe look like?
[131032380010] |I can't remember if I fixed that, but you can take a look at my answer file:
[131032380020] |http://www.north-winds.org/unix/preseed-example.cfg
[131032380030] |It's based off of a more sophisticated example at:
[131032380040] |https://help.ubuntu.com/10.04/installation-guide/i386/appendix-preseed.html https://help.ubuntu.com/10.04/installation-guide/example-preseed.txt
[131032390010] |Prevent KDE and Gnome from showing each other's icons in the menu
[131032390020] |Often, I have both KDE4 and Gnome installed on my machines.
[131032390030] |What really bothers me when I do this, is that the Gnome menu will show all kinds of things that I will hardly ever use in Gnome, like Konqueror, KMail and Konversation.
[131032390040] |(Just to name a few, the list is obviously much longer.)
[131032390050] |I hate this.
[131032390060] |I would love a way (an easy way) to make sure Gnome only shows Gnome related icons and KDE only shows KDE related icons.
[131032390070] |Of course, you can manually hide all the KDE icons from the Gnome menu, but that sucks, so I'd rather not go that way.
[131032390080] |Does anyone have a solution for this?
[131032390090] |A script maybe?
[131032400010] |The menu entries are created from .desktop-files.
[131032400020] |There you can specify whether the icon should be shown only in Gnome, only in KDE, or in both, e.g. with an OnlyShowIn=KDE line.
[131032400030] |To hide all KDE-apps, you might do as root
[131032400040] |To hide Gnome-Apps in KDE, echo “OnlyShowIn=GNOME” into the Gnome .desktop-files.
[131032400050] |However, when an update to a package arrives, it could overwrite these changes, so you might have to repeat the edit, e.g. with a cron job.
[131032400060] |Make a backup of /usr/share/applications/kde4/ before doing this to be safe if something goes wrong.
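A hedged sketch of what such a root command might look like (the path is the one mentioned above; the trailing semicolon follows the desktop-entry spec; this is an illustration, not the author's exact command):
cp -a /usr/share/applications/kde4 /root/kde4-desktop-backup   # backup first
for f in /usr/share/applications/kde4/*.desktop; do
    grep -q '^OnlyShowIn=' "$f" || echo 'OnlyShowIn=KDE;' >> "$f"
done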
[131032400070] |For a list of all information which can be included in a .desktop-file, see http://standards.freedesktop.org/desktop-entry-spec/latest/ar01s05.html
[131032410010] |error using "du" command
[131032410020] |Hi
[131032410030] |I am on a university machine trying to estimate the disk usage of my space. I run
[131032410040] |from the beginning of my account, but it gives an error that ./Yesterday/Yesterday is not a device, so I run
[131032410050] |and the error goes away.
[131032410060] |But is there a more elegant way, and why does this error occur?
[131032410070] |I can imagine how the Yesterday of Yesterday gets excluded, but what is the concept behind the error?
[131032410080] |Best,
[131032410090] |1) using stat:
[131032410100] |2) using stat on Yesterday/Yesterday
[131032410110] |3) using uname for details
[131032410120] |4) using df on Yesterday/
[131032410130] |5) using df on Yesterday/Yesterday
[131032410140] |6) mount | grep Yesterday produces nothing
[131032420010] |Your university seems to be using an AFS filesystem, and the Yesterday folder contains a read-only snapshot of yesterday's content.
[131032420020] |Probably the web page of your university's IT department has some information on this.
[131032420030] |Probably you have a quota set for your disk space (i.e. a maximum amount you are allowed to use), you can check your current usage with:
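For example, assuming the OpenAFS client tools are installed, something like the following should show the volume quota for your home directory (on an ordinary local filesystem the standard quota tool applies instead):
fs listquota ~    # AFS volume quota, current usage and percentage
quota -s          # non-AFS user quotas, human-readable sizes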
[131032430010] |Mangled history
[131032430020] |In GUI mode, when a user has more than one terminal open, how do the terminals write to that user's history file? The reason I ask is that the user may well end up executing different commands in each terminal.
[131032430030] |So, does the history file end up saving the commands from all the terminals or only from the first one to be opened?
[131032430040] |Or is there some other kind of scheme that is employed to tackle this situation?
[131032430050] |Thank you.
[131032440010] |It depends entirely on how the shell chooses to handle it
[131032440020] |bash by default will overwrite the history file with the local history of each shell as it exits, so the last shell to exit wins.
[131032440030] |The histappend option will cause it to append to the master history instead (shopt -s histappend).
[131032440040] |zsh does the same by default, and has a few options for dealing with it:
[131032440050] |appendhistory -- The history of each shell is appended to the master history file as the shell exits
[131032440060] |incappendhistory -- The master history file is updated each time a line is executed in any shell, instead of waiting until that shell exits
[131032440070] |sharehistory -- Like incappendhistory, but also pulls changes from the master history file into all running shells, so you can run a command in one shell and then hit Up in another shell and see it
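As a minimal sketch, the corresponding lines for ~/.zshrc would be (pick one of the three behaviours):
setopt APPEND_HISTORY        # append to the history file when the shell exits
setopt INC_APPEND_HISTORY    # append after every command instead
setopt SHARE_HISTORY         # append and re-read, sharing history between running shells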
[131032450010] |I originally got this idea from the O'Reilly "Unix Power Tools" book.
[131032450020] |In my .profile I set:
[131032450030] |export HISTFILE=$HOME/.sh_hist.$$
[131032450040] |Every time my .profile gets read, I get a new history file named with the PID of my session.
[131032450050] |If I have multiple logins, each login gets a unique history file.
[131032450060] |Works in ksh and bash.
[131032450070] |If you're just opening new terminals in an X session, those usually aren't login shells, but you can configure them to act as login shells.
[131032450080] |For example, rxvt +ls will start rxvt as a login shell.
[131032450090] |Check the docs for whatever terminal you're using.
[131032450100] |Also, unless you're using a .logout or .bash_logout file (or some other means) to clean up, you'll eventually have a crapload of .sh_hist files.
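A hedged sketch of both halves of that idea (the retention count of 10 files is just an example):
# in ~/.profile: one history file per login shell
export HISTFILE=$HOME/.sh_hist.$$
# in ~/.bash_logout (or similar): keep only the 10 newest history files
ls -t $HOME/.sh_hist.* 2>/dev/null | tail -n +11 | xargs -r rm --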
[131032460010] |How does cached memory work for executables?
[131032460020] |I always understood cached memory in Linux (as in free -m) as pages of memory that can be reused if they are needed again, or quickly freed if more memory is needed by new applications (I found this article to be helpful a few years ago).
[131032460030] |It seems that both executables (e.g., a program like thunderbird) and data (e.g., the content of a log file) can be cached.
[131032460040] |In fact, I don't think there is a distinction between text and executable files on *nix.
[131032460050] |I can see how it could work for data that does not change much (e.g., a text file), but how does it work for programs that are dynamic by nature?
[131032460060] |Surely, the cached memory cannot restore objects that were dynamically allocated?
[131032460070] |Is it only the bytecode (or instructions in the case of scripts) that is cached then?
[131032460080] |EDIT 1
[131032460090] |By cached memory, I mean the memory under the column "cached" when I run "free":
[131032460100] |EDIT 2
[131032460110] |Thanks to ls-lrt who gave me the hint I was missing.
[131032460120] |As this response on SE clearly mentions (should have searched there first), "The cached memory is the disk cache used by the VFS".
[131032460130] |This means that for executables, only the instructions (bytecode, script lines, etc.) are represented under this column and it has nothing to do with things that are dynamically allocated.
[131032460140] |I was under the impression that entire pages of memory (including dynamically created objects) were "cached".
[131032460150] |EDIT 3
[131032460160] |Nice examples about using the disk cache.
[131032470010] |There is basically no difference in the way the Linux kernel treats various types of data in memory: the part of the kernel that takes care of this is called the "virtual memory subsystem", and it only cares whether a certain portion of memory is in use by a program or not.
[131032470020] |The Linux kernel partitions the available RAM into little chunks called "pages".
[131032470030] |It then classifies pages into "in use" (for instance: pages containing code or data for a program that is currently running) and "unused" pages.
[131032470040] |For the "in use" pages, it does not matter whether they contain executable code, text data, Java bytecode or whatever -- the only thing that matters is that they are "in use": they need to be in RAM because that data is constantly being accessed.
[131032470050] |Since RAM is the fastest storage device available, it is a waste to let "unused" pages be inactive, so the kernel "recycles" unused pages to cache data that has been fetched from the disk and could be needed again shortly.
[131032470060] |The kernel has some algorithms to make this prediction; I/O system performance depends to a large extent on how well these algorithms can foretell the actual workload of your computer.
[131032470070] |In addition, to speed up I/O operations, part of the RAM will be used to buffer data that is being written to the disk: you might have noticed that, when you copy a large file to a slow disk (e.g., a USB stick), the cp command finishes before the data is fully written to the device: this happens precisely because the kernel is holding some data in "free" memory to speed up the (slow) write operation; data will be written back to disk some seconds later, when the cp program has possibly already finished.
[131032470080] |As soon as data is written to disk, these pages will be again considered free (and re-used for caching data, or moved to the "in use" pool if the need arises).
[131032470090] |As you point out, the "cached" pages can be (relatively) quickly reclaimed by the kernel, in case there is a need to allocate more pages for "in use" data, since the "cache" pages are just holding data that is available from disk (the cached data will be fetched from disk again when it is asked for).
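A quick way to see this in action from a shell (any large file will do; "bigfile" is just a placeholder):
free -m                   # note the value in the "cached" column
cat bigfile > /dev/null   # read the file once; its contents land in the page cache
free -m                   # "cached" has grown by roughly the file's size
cat bigfile > /dev/null   # the second read is served from RAM and returns almost instantly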
[131032470100] |Further reading:
[131032470110] |Virtual Memory Basics
[131032470120] |Paging and Swapping
[131032480010] |The cache shown in free is file system cache.
[131032480020] |At the file system level, everything is just octets of data.
[131032480030] |Whether application or file data, no difference at that level.
[131032480040] |While it is possible to reload an executable that has been swapped out from file cache (executables are not written to the swap file, they are simply kicked out of memory), this would be rare because the file cache is usually sacrificed first.
[131032480050] |Now, be clear on the distinction between file cache as shown by free and any other kind of memory that may be involved by a running program.
[131032480060] |For it is not clear what you mean by "the cached memory cannot restore objects that were dynamically allocated."
[131032480070] |Any memory in use by the application is not involved with the file cache.
[131032480080] |No memory allocations of any kind by an application are cached by the file cache.
[131032480090] |The file cache is only an intermediary between the disk and the OS.
[131032480100] |To answer your question: "Is it only the bytecode (or instructions in the case of scripts) that is cached then?"
[131032480110] |The file cache only caches the octets on disk.
[131032480120] |It does not care what memory is used by the application.
[131032490010] |How should I deal with Fedora's short life cycle?
[131032490020] |I have been using Fedora as my primary distribution for a very long time.
[131032490030] |One thing that "bothers" me is its relatively short life cycle.
[131032490040] |I install its latest release, restore my backup, customize the applications, take a sigh of relief but by then the new release is just around the corner.
[131032490050] |Fedora has a comparatively short life cycle: version X is maintained until one month after version X+2 is released.
[131032490060] |With 6 months between releases, the maintenance period is a very short 13 months for each version.
[131032490070] |Wikipedia
[131032490080] |Once I used pre-upgrade when moving from Fedora 9 to 10.
[131032490090] |It didn't work smoothly.
[131032490100] |The new upgraded Fedora was using the old kernel images of Fedora 9.
[131032490110] |It took me a long time to figure out, and I had to use a live USB to fix it.
[131032490120] |Since then I decided not to use pre-upgrade or the "Upgrade an existing installation" option.
[131032490130] |I had some hiccups with applications too.
[131032490140] |Using a fresh install seems safer.
[131032490150] |But now I have to backup all data, along with my scripts and rc files and restore it again.
[131032490160] |This takes time along with installing apps that are not installed by default and removing not-required apps.
[131032490170] |Main problem is customization settings of each application.
[131032490180] |From Firefox alone, I would have to export saved passwords, bookmarks, saved sessions, preferences of different extensions, etc.
[131032490190] |Some other applications do not provide an option to save/export settings at all.
[131032490200] |So I have to configure each one manually.
[131032490210] |All in all, upgrading to latest release takes time, even longer if my net connection goes down for some reason.
[131032490220] |Each time I upgrade, I cannot get it out of my mind that within a few months a new release will be knocking on my door, and I will have to repeat the whole exercise again.
[131032490230] |What could be a painless and easy procedure to take backups of all data and to restore it?
[131032490240] |I would prefer a command line solution.
[131032490250] |How can I preserve settings of applications, if they do not provide an option to export settings?
[131032490260] |If you are Fedora user, what do you do to keep up with its frequent releases?
[131032490270] |How can I make this whole procedure faster and less painful?
[131032490280] |It's the total amount of time and effort that an upgrade takes which made me post this question.
[131032490290] |How can I make my life easier?
[131032490300] |Any help, suggestions and ideas would be greatly appreciated.
[131032490310] |Thanks for your time.
[131032500010] |I use an external drive, where I back up some of my folders and dotfiles with rsync -avz; once the first snapshot is taken, only very little data needs to move onto the external drive for subsequent backups.
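A minimal sketch of such a run (the source paths and the mount point of the external drive are examples only):
rsync -avz ~/.bashrc ~/.vimrc ~/Documents /media/external/backup/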
[131032500020] |Well, pretty much all of that information is stored in a dotfile or in some dotdirectory.
[131032500030] |All you need to do is back up those directories.
[131032500040] |That's what I do anyway, and it's been working for years.
[131032500050] |It all depends on how big a change the next release is. For instance, when the file systems don't change, I don't see a reason to re-install anymore.
[131032500060] |It was all different back when FC6 was around.
[131032500070] |Upgrading was a pain and I usually made fresh installs back then.
[131032500080] |From Fedora 8 onwards, preupgrade has worked fine; I haven't had any issues with it.
[131032500090] |I did however a fresh install for Fedora 13, since I wanted all my hard drives to be formatted in ext4.
[131032500100] |Other than that, upgrading to latest version of Fedora usually works well.
[131032500110] |In general, keep some track of what changes you make to the system.
[131032500120] |What files in /etc/ you changed, what programs you compiled yourself, what libs you put into /usr/lib/ yourself, etc.
[131032500130] |This makes life much easier, as does a backup that is constantly kept up to date.
[131032500140] |Preupgrade works fine by now, but when you want to change the file system or something similar, there's no way around reinstalling.
[131032500150] |Fedora's upgrade guide will advise you when you should indeed reinstall instead of doing an upgrade.
[131032500160] |The PreUpgrade manual says it's possible to upgrade from F11 directly to F13, for example.
[131032500170] |I would advise against it.
[131032500180] |Since older Fedora releases no longer receive updates, the PreUpgrade package on them is most likely outdated.
[131032500190] |This won't help here, but as an OpenBSD user you need to make most changes manually, and you can't upgrade to the latest release from anything other than the previous one.
[131032510010] |Hello, I had the exact same problem with my Fedora installation, and the solution is quite simple: partitions.
[131032510020] |Since I created a separate /home partition, I can format / while all my preferences for every program stay intact.
[131032510030] |Just separate your data from your system with partitions, and on reinstall make sure to format only /, specifying your home partition to be mounted on /home.
[131032510040] |Have fun!
[131032520010] |RedHat has a tool called kickstart[1] to automate installs.
[131032520020] |Not sure if Fedora offers the same tool or not, but it might help getting the initial install underway.
[131032520030] |I also second the separate /home partition.
[131032520040] |Backups are still necessary in case of a slip of the finger, but it makes life way easier.
[131032520050] |http://www.faqs.org/docs/Linux-HOWTO/KickStart-HOWTO.html
[131032530010] |Why is connecting scanners to Linux such a pain?
[131032530020] |I ran into this problem yesterday, about six months or so after the last time I used my scanner.
[131032530030] |I've installed a new Linux in the meantime.
[131032530040] |I have a Mustek BearPaw 1200 CU Plus.
[131032530050] |It's an old and quite cheap scanner, but it's been working for over six years now, so until it breaks, there's no need to replace it.
[131032530060] |In order to make this scanner run, I have to get a file called PS1Dfw.usb, which is the firmware that needs to be loaded onto the device every time before scanning.
[131032530070] |After installing sane and the backends, and putting the firmware into /usr/share/sane/gt68xx/, I could scan the pages I needed.
[131032530080] |But why is this such a pain?
[131032530090] |Printers aren't that hard to connect to Linux, so why is it like that with scanners?
[131032530100] |And why isn't the firmware in a package or something?
[131032530110] |The site I downloaded the firmware from hasn't been updated since 2007 and is no longer maintained.
[131032530120] |What if it finally goes offline, do we lose support for all gt68xx based scanners?
[131032530130] |Any advice on how to make this simpler is welcome (I don't use my scanner that often, and I usually do a new installation of the OS in the meantime.
[131032530140] |Then, when I do need my scanner, it's all looking up how to make that damn thing work all over again).
[131032540010] |In most cases, it's that drivers for the wide variety of devices just aren't around.
[131032540020] |And unlike graphics cards, wireless cards (nowadays) and printers, scanners aren't used by the majority of users, so there isn't as much effort put into them.
[131032550010] |One reason why some firmwares aren't packaged/included is that sometimes there is no license that allows that, or there is a license that doesn't allow that.
[131032550020] |It seems like in this case, the author of the driver has permission to distribute those files, but there is no info about redistribution (so somebody should ask Mustek for a license that clearly states that is allowed).
[131032560010] |You could always keep an old version of your favourite OS for this task with the old drivers, running in a VM (e.g. VirtualBox).
[131032570010] |cannot boot my ubuntu partition
[131032570020] |I am running SUSE 11.2.
[131032570030] |Ubuntu is on the extended partition /dev/sda5, but when I boot I get
[131032570040] |This is the Ubuntu entry in menu.lst:
[131032580010] |It could be that your vmlinuz file is not found.
[131032580020] |It could be that this is because it is in the /boot directory on sda5, hence you should change your line to
[131032580030] |or, if it is placed somewhere else, to wherever its place is.
[131032580040] |(You might need to do the same with the initrd.img file)
[131032580050] |Also, check if the vmlinuz and initrd.img files for ubuntu have exactly this name.
[131032580060] |Usually, they have the kernel version and type in the name (e.g. vmlinuz-2.6.35-22-generic)
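As a hedged sketch, a menu.lst entry along these lines might work (the kernel version and the (hd0,4) mapping for sda5 are assumptions; adjust them to your system):
title   Ubuntu
root    (hd0,4)
kernel  /boot/vmlinuz-2.6.35-22-generic root=/dev/sda5 ro quiet
initrd  /boot/initrd.img-2.6.35-22-generic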
[131032590010] |You could try the following :
[131032590020] |Check that the locations pointed to by the symlinks actually exist and are the correct files you're looking for.
[131032590030] |For example on my box, vmlinuz -> boot/vmlinuz-2.6.32-25-generic.
[131032590040] |You can then modify your grub configuration to point to the right files as stated in txwikinger's answer.
[131032590050] |Another quick way to debug this is to get a prompt in grub (by pressing C in grub1, not sure about grub2), then you can use tab-completion to list available files, and test it on-the-fly.
[131032600010] |Hard drive/device partition naming convention in Linux
[131032600020] |What is the hard disk drive/device partition naming convention in Linux?
[131032600030] |For example, [hd0,0], sd0, etc.
[131032600040] |What does it actually mean?
[131032600050] |What is the significance of this when I need to install multiple OSes on the same machine?
[131032600060] |How can we relate it to Windows partitioning (for example, the C:\ or D:\ drive)?
[131032610010] |On my slackware box, /dev/hda is the first hard drive detected. /dev/hda1 and /dev/hda2 are the first two partitions.
[131032610020] |I can use fdisk to see the partitions.
[131032610030] |On my fedora box, /dev/sda is the first hard drive detected. /dev/sda1 and /dev/sda2 would be the first two partitions.
[131032620010] |The convention changes depending on what you're looking at; hd0,0 looks similar to GRUB, while sd0 is similar to entries in /dev, but neither matches what I normally see.
[131032620020] |In /dev:
[131032620030] |IDE drives start with hd, while SATA (and I believe any kind of serial device) start with sd
[131032620040] |Drives are lettered starting with a in cable order, so /dev/sda is the first serial drive, and /dev/hdb is the second IDE drive
[131032620050] |Partitions on a drive are numbered starting with 1, so /dev/sdb1 is the first partition on the second serial drive
[131032620060] |GRUB 1 doesn't have the distinction between drive types; it's always of the form (hdX, Y):
[131032620070] |X is the number of the drive, starting with 0, so sda is hd0, sdb is hd1, etc.
[131032620080] |Y is the number of the partition, starting with 0 (not 1 like /dev), so sda1 is (hd0, 0)
[131032620090] |I believe GRUB 2 uses a different syntax, but I don't know it
[131032620100] |It's significant when you're installing multiple OSes if you want to put them on separate partitions -- you need to keep track of which OS is where.
[131032620110] |It's really significant anytime you're dealing with unmounted drives; you need to know that / is on /dev/sda1 and /home is on /dev/sda2 (for example)
[131032620120] |As far as I know, Windows disks start from disk 0, and partitions don't have any particular numbering.
[131032620130] |Drive letters are assigned however you like and not tied to a particular partition
[131032630010] |(hd0,0) is Grub syntax.
[131032630020] |(Note that these are parentheses, not square brackets.)
[131032630030] |Grub is a bootloader, that is, a small program that is launched by your computer's BIOS and whose job is to load the operating system. hd0 references the first drive detected by the BIOS, hd1 references the second one.
[131032630040] |The second number is a partition number; Grub 1 starts from 0, while Grub 2 starts from 1.
[131032630050] |See “Naming convention” in the Grub manual if you want more details.
[131032630060] |/dev/sda, /dev/sdb, etc., are the default names of hard disks (and other similar storage like flash disks of all kinds, but not CD or tape drives) under Linux.
[131032630070] |The last letter grows in the order in which the disks are detected.
[131032630080] |You may find /dev/hda, /dev/hdb, etc., on some Linux distributions. sd indicates that the disk driver uses a SCSI interface internally, while hd indicates that the driver uses an IDE interface.
[131032630090] |This is only an internal kernel matter; you can and often do have IDE disks appear as sd.
[131032630100] |The additional number is the partition number, starting at 1.
[131032630110] |The partitions you're likely to encounter follow the PC partitioning scheme.
[131032630120] |A disk has up to four primary partitions, numbered 1 to 4 (or 0 to 3 in Grub 1).
[131032630130] |It may also have any number of logical partitions, in which case one of the primary partitions cannot contain a filesystem but must instead be an extended partition (a container for the logical partitions).
[131032630140] |Logical partitions are numbered from 5 on (from 4 in Grub 1).
[131032630150] |The names of the device files (e.g. /dev/sda) used by Linux are in fact assigned by the udev program, and can be configured.
[131032630160] |This is typically useful in advanced situations involving removable media.
[131032630170] |Most of the time, you don't need to care about device names.
[131032630180] |They are referenced in a very small number of places, typically only two: the bootloader configuration (as we've seen, Grub has its own names anyway), and the file /etc/fstab which lists the filesystems to mount on boot.
[131032630190] |(And even /etc/fstab does not always reference partitions by names like /dev/sda1.)
[131032630200] |What matters is mount points, that is, the location (directory) at which each filesystem is mounted.
[131032630210] |Windows uses a completely different naming scheme which is hard to relate to the underlying hardware structure. c:, d:, etc., are assigned to partitions of a type that Windows recognizes in a particular order (and there are ways to influence this order).
[131032630220] |Wikipedia has the details.
[131032640010] |Screen fading out on GNOME, without ability to cancel
[131032640020] |Sometimes, on my Arch Linux laptop, using GNOME for the desktop, my screen will fade out after a while of non-activity (even if I'm watching a video).
[131032640030] |This fade out is very slow, and can't be cancelled by mouse movements, keyboard presses etc.
[131032640040] |What is responsible for this, and how do I disable it?
[131032650010] |The fade out is probably the screensaver kicking in. Try to disable it by going to System->Preferences->Look and Feel->Screensaver and disabling "Activate screensaver when computer is idle" if indeed the active screen saver is "Blank screen".
[131032650020] |The fact that the fading out can't be interrupted is a bug it seems.
[131032650030] |E.g. Fedora has a bug report stating it is a fault of the X server and is fixed with an update.
[131032660010] |What is the use/meaning of 'swap area' while linux installation?
[131032660020] |When I installed Ubuntu on my x86 machine, I had to configure some memory as a 'swap area'.
[131032660030] |What is the use of this memory, and what is its importance for Linux file systems?
[131032660040] |How can I determine the right size of the 'swap area' on a machine so that Linux works safely on it?
[131032670010] |The swap partition (or file) in linux is equivalent to the page file in Windows.
[131032670020] |It's used for offloading the RAM.
[131032670030] |If the RAM gets full, the OS can use the swap partition as extra RAM.
[131032670040] |As for how to determine your swap size, the rule of thumb is (used to be) 2x the amount of RAM in your machine.
[131032670050] |So if you had 512MB of ram, you would have a 1GB swap partition.
[131032670060] |This rule is largely outdated though.
[131032670070] |So if you have more than say 2GB of ram, you don't really need 4GB of swap.
[131032670080] |I normally make my swap size equal to the RAM size + 10%.
[131032670090] |It has to be equal to the RAM size so that you can use suspend to disk features, and then + 10% for good measure.
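If you ever need more swap after installation, a swap file is a hedged alternative to resizing partitions (the 2 GB size is just an example; run as root):
dd if=/dev/zero of=/swapfile bs=1M count=2048   # create a 2 GB file
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
swapon -s                                       # or free -m, to verify the new swap space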
[131032680010] |You probably mean you configured some space on your hard disk for swap.
[131032680020] |Swap is part of memory management.
[131032680030] |It extends the virtual memory space you have, such that it can be more than your physical memory available, i.e. RAM.
[131032680040] |This then allows pages of memory to be swapped between RAM and hard disk; obviously, performance decreases when this happens.
[131032680050] |This way applications can allocate more memory than you have as RAM, even if it cannot all be used at the same time.
[131032690010] |A very "untechnical" explanation:
[131032690020] |Swap area is hard drive space that is reserved to act as extra RAM for when your computer needs more RAM than what is available.
[131032690030] |Note that when this happens your computer might slow down noticeably.
[131032690040] |The Ubuntu help website recommends that you have double the amount of RAM as swap.
[131032690050] |So if you have 1GB of RAM, you should have 2GB of swap; however, your computer should work fine with less.
[131032700010] |The Ubuntu Swap FAQ provides some of the answers you ask for.
[131032700020] |There are also a few posts on this very site, that already cover much of the topic:
[131032700030] |Why swap when there is more than enough RAM -- the set of answers given here gives a pretty complete picture of what swap is used for in Linux
[131032700040] |Why use a Linux swap partition than a file
[131032710010] |What is $debian_chroot in .bashrc?
[131032710020] |What is the debian_chroot variable in my bashrc file? And what is it doing here?
[131032710030] |PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
[131032720010] |Chroot is a unix feature that lets you restrict a process to a subtree of the filesystem.
[131032720020] |One traditional use is FTP servers that chroot to a subset of the filesystem containing only a few utilities and configuration files, plus the files to serve; that way, even if an intruder manages to exploit a bug in the server, they won't be able to access files outside the chroot.
[131032720030] |Another common use is when you're installing or repairing a unix system and you boot from a different system (such as a live CD): once a basic system is available, you can chroot into it and do more work.
[131032720040] |The prompt setting includes the content of $debian_chroot in the prompt, inside parentheses, unless it is empty.
[131032720050] |This variable is initialized in /etc/bash.bashrc to the contents of the file /etc/debian_chroot.
[131032720060] |Thus, if you follow the convention to include a name for your chroots at the location /path/to/chroot/etc/debian_chroot, your prompt will contain an indication of which chroot you're in.
[131032720070] |A program that follows this convention is schroot, a tool for building and using chroots conveniently (I think the original intent was to facilitate having a stable Debian in a chroot inside an unstable or testing Debian, but the program has evolved quite a bit beyond that).
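A minimal sketch of the convention, assuming a chroot living at /srv/chroots/sid (both the path and the name are hypothetical):
echo sid > /srv/chroots/sid/etc/debian_chroot
chroot /srv/chroots/sid /bin/bash
# the prompt inside the chroot now reads something like: (sid)root@host:/#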
[131032730010] |Meaning of hard disk drive software partitions?
[131032730020] |I would like to know
[131032730030] |What is the exact meaning of primary partitions?
[131032730040] |Why is it named so, and why is it restricted to 4?
[131032730050] |What is meant by extended partitions?
[131032730060] |Why is it named so, and how many extended partitions can a hard disk have?
[131032730070] |What is meant by logical partitions?
[131032730080] |Why is it named so?
[131032730090] |How is it calculated?
[131032730100] |What are the advantages of this software partitioning?
[131032730110] |Is it possible to install an OS (Linux/Windows) on all partitions?
[131032730120] |If no, why?
[131032740010] |Hard drives have a built-in partition table on the MBR.
[131032740020] |Due to the structure of that table the drive is limited to four partitions.
[131032740030] |These are called primary partitions.
[131032740040] |You can have more partitions by creating virtual ( called logical ) partitions on one of the four primary partitions.
[131032740050] |There is a limit of 24 logical partitions.
[131032740060] |The partition you choose to split into logical partitions is called the extended partition, and as far as I understand you can have only one.
[131032740070] |The advantage to logical partitions is quite simply that you can have more than 4 partitions on a disk.
[131032740080] |You should be able to install any OS on all of the partitions.
[131032740090] |See this page for more details
[131032740100] |In the current IBM PC architecture, there is a partition table in the drive's Master Boot Record (the section of the hard drive that contains the commands necessary to start the operating system), or MBR, that lists information about the partitions on the hard drive.
[131032740110] |This partition table is then further split into 4 partition table entries, with each entry corresponding to a partition.
[131032740120] |Due to this it is only possible to have four partitions.
[131032740130] |These 4 partitions are typically known as primary partitions.
[131032740140] |To overcome this restriction, system developers decided to add a new type of partition called the extended partition.
[131032740150] |By replacing one of the four primary partitions with an extended partition, you can then make an additional 24 logical partitions within the extended one.
[131032750010] |We can only have 4 primary partitions because the Master Boot Record is 512 bytes.
[131032750020] |64 of those bytes are available for the partition table.
[131032750030] |A primary partition takes 16 bytes.
[131032750040] |The other 446 bytes are used for the rest of the MBR.
[131032750050] |If you want a really technical answer, you can read the Wikipedia article, http://en.wikipedia.org/wiki/Master_boot_record
[131032750060] |It has images of the MBR and a breakdown of each section's function, as well as a breakdown of the partition table and each of its sections' functions.
[131032760010] |What strings should I look for in /var/log/auth.log?
[131032760020] |I wrote a bash command to scan /var/log/auth.log for messages occurring on the current day indicating unauthorised access.
[131032760030] |Currently it just fetches messages matching BREAK-IN and unauthorized.
[131032760040] |What other strings should I search for in /var/log/auth.log to keep tabs on unauthorized access?
[131032760050] |Here's the script for reference:
[131032760060] |Edit
[131032760070] |Here's the amended command based on Justin's suggestions and what I found through Google
[131032770010] |You could look for "Invalid user", which is logged when someone tries to log on with an account that does not exist.
[131032770020] |It will also log "Failed password" when someone enters an invalid password.
[131032770030] |Also, you don't need to use cat to dump the file to grep.
[131032770040] |grep can read the file given as its second argument:
[131032770050] |grep search-criteria /path/to/file
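A hedged sketch of such a check for today's entries (the date format assumes the traditional "Oct  7"-style syslog timestamps used in auth.log):
grep "$(date '+%b %e')" /var/log/auth.log | grep -E 'BREAK-IN|unauthorized|[Ii]nvalid user|Failed password'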
[131032780010] |Printing (saving) the last bash input command
[131032780020] |How can I get the last executed command from bash?
[131032780030] |I know that !!:p prints the last command, but it seems I can't use that anywhere except the bash prompt.
[131032780040] |I tried echo !!:p but it prints
[131032780050] |I want to use this inside the PROMPT_COMMAND variable, so I need to get it as a string so I can just print it out.
[131032780060] |Is there an easy way to do this?
[131032780070] |Am I looking in the wrong place?
[131032780080] |I guess I'm not clear.
[131032780090] |I need to store the last command run so I can re-display it after the output and before the next prompt.
[131032780100] |For example this is what I want the output to look like:
[131032780110] |I'm doing this by changing my prompt in my .bashrc file
[131032780120] |But !!:p only works correctly from the bash prompt.
[131032780130] |So how can I store the last command so that I can reprint it later?
[131032790010] |You can access the just-executed command line with the history built-in.
[131032790020] |(I have no idea why history 1 prints the just-executed command line but fc -nl -1 prints the previous command, as does fc -nl 0.)
[131032790030] |This prints a number before the command text.
[131032790040] |Here's a version that removes the number.
[131032790050] |(It may be incorrect if you go beyond 99999 history lines, I don't know how bash formats the number then.)
[131032790060] |(Note that echo -en ..."$prompt_history"... would expand backslashes in the command line, so I use echo -E and let the shell expansion generate the control characters with $''.)
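Putting the pieces together, a minimal sketch for ~/.bashrc might look like this (print_last_cmd is a name I made up; it omits the echo -en/$'' refinements mentioned above):
print_last_cmd() {
    local last
    last=$(HISTTIMEFORMAT= history 1 | sed 's/^ *[0-9]* *//')   # strip the leading history number
    echo -E "last: $last"
}
PROMPT_COMMAND=print_last_cmd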
[131032800010] |RS232 chat. Simplest way?
[131032800020] |I'm working in a project in which I will have to deal with serial communications.
[131032800030] |For testing, I want make to 2 computers speak to each other (with my interaction) via RS232.
[131032800040] |Will the next thing work?
[131032800050] |in computer A:
[131032800060] |in computer B:
[131032800070] |Will computer A, receive data from B, or not?
[131032800080] |If I'm wrong, what would be the easiest way to send something to the other computer?
[131032800090] |Ideas are welcome!
[131032810010] |Well, if you want to communicate via the serial port you have to setup the right parameters (baud, stop bit, parity,handshake etc.).
[131032810020] |I used minicom in the past for stuff like using a computer as a serial console terminal to another.
[131032810030] |The cu command is an alternative.
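A minimal sketch of a plain shell approach, assuming /dev/ttyS0 on both machines and matching line settings on both ends:
stty -F /dev/ttyS0 9600 cs8 -cstopb -parenb raw   # set the same parameters on both computers
cat /dev/ttyS0                                    # on computer A: print whatever arrives
echo "hello from B" > /dev/ttyS0                  # on computer B: send a line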
[131032820010] |An easier way, if you know how to program in Python, might be to use an API available for Python called pySerial, so that you don't have to worry about doing any of the dirty work of setting up flags or passing parameters to the driver that handles the serial port on your computer.
[131032820020] |Pyserial takes care of all this for you in the background.
[131032820030] |You would have to specify the baud rate at most, and leave the rest as default for a working serial connection between the 2 computers.
[131032820040] |The entire code for enabling such a connection could be at the most a few lines or half a page long.
[131032830010] |Help with PKGBUILD
[131032830020] |As I would expect, there are a lot of Arch Linux users in here, and I think one of you can help me before I upload my first PKGBUILD.
[131032830030] |The package I want to build is from a git repo.
[131032830040] |I've read the wiki guidelines but the CVS page is not very complete...
[131032830050] |When I run makepkg, the process seems to go fine, but at some point it gets stuck.
[131032830060] |I don't know how to proceed.
[131032830070] |Could you help me?
[131032840010] |Firstly, pkgdesc, which is short for package description, should be filled out.
[131032840020] |Next, you don't need to have empty arrays.
[131032840030] |Remember, the stuff in build() is the same as what you would type out to build the package by hand.
[131032840040] |You have to run autogen.sh... and I couldn't do that due to some missing gnome dependency (I run KDE).
[131032840050] |You'll also notice that ./configure.ac isn't executable... so how would you execute it?
[131032840060] |Figure out how to build it by hand and then put that in the build section of the PKGBUILD.
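A hedged sketch of the relevant parts of such a git-based PKGBUILD ($_gitname and the autogen.sh options are placeholders, not the asker's actual package):
pkgdesc="One-line description of the package"
build() {
  cd "$srcdir/$_gitname"
  ./autogen.sh --prefix=/usr   # generates and runs configure; configure.ac itself is never executed
  make
}
package() {
  cd "$srcdir/$_gitname"
  make DESTDIR="$pkgdir" install
}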
[131032850010] |vmlinuz file for ubuntu
[131032850020] |how do i look for vmlinuz file for ubuntu from suse.
[131032850030] |I try: mount /dev/sda5 /mnt then cd /boot but i dont see the vmlinuz file for ubuntu. where is it?
[131032850040] |The /boot partition is empty.
[131032860010] |IIRC, on Ubuntu /boot by default gets its own partition.
[131032860020] |You could try check
[131032860030] |the line looking somewhat like the following should be your /boot
[131032860040] |or with
[131032860050] |it should look kind of like this
[131032860060] |You'll notice the "Boot" flag set, which means that this is the partition the BIOS tries to load the bootloader from... under normal circumstances this should be the /boot partition.
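The commands hinted at above are presumably along these lines (the device name is an example):
sudo fdisk -l /dev/sda   # the partition marked with * under "Boot" carries the boot flag
sudo parted -l           # same information in a different layout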
[131032860070] |HTH
[131032870010] |Kernel debugging
[131032870020] |I would like to study the flow of some Linux device drivers and some minimal kernel flow (threading together with context switching, and interrupt management).
[131032870030] |How can I debug the linux kernel?
[131032870040] |What are the basic steps for doing that?
[131032870050] |Recently I successfully compiled and installed a new kernel (2.6.34.7) on my machine running the 2.6.29 kernel.
[131032880010] |It depends on what you really need.
[131032880020] |Probably the simple printk() function is going to be OK for the beginning.
[131032880030] |There is also the /proc interface, which you can use to get useful information from the kernel.
[131032880040] |If you need something more complicated, use KGDB (kernel debugger).
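A few shell-level companions to the printk()//proc approach, as a sketch (the log level 8 simply means "print everything to the console"):
dmesg | tail -n 20                 # read recent printk() output
echo 8 > /proc/sys/kernel/printk   # as root: raise the console log level
cat /proc/interrupts               # example of information exported through /proc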
[131032890010] |Why does 'set' syntax not work for syntax highlighting in vim?
[131032890020] |While searching for how to turn syntax highlighting on in vim, I found this page, which says that vimrc should have the following line
[131032890030] |Why is it not something like set syntax on?
[131032890040] |Like we have set nu, set ai, etc.
[131032890050] |With set syntax on, I get this error
[131032890060] |and with set syntax, I get this
[131032900010] |It should be syntax on or syntax off.
[131032900020] |set syntax=something is used to change the current syntax (c, perl, nasm, etc).
[131032900030] |UPDATE: As @garyjohn pointed out in the comments, it's possible to turn it on/off for the current buffer and its corresponding filetype using set syntax=ON and set syntax=OFF.
[131032910010] |What is meant by mounting a device in Linux?
[131032910020] |I've heard the term "mounting" when referring to devices in Linux.
[131032910030] |What is its actual meaning?
[131032910040] |How is it handled now, compared to older versions?
[131032910050] |I haven't done that manually via the command-line.
[131032910060] |Can you give the steps (commands) for mounting a simple device in Linux?
[131032920010] |Unix systems have a single directory tree.
[131032920020] |All accessible storage must have an associated location in this single directory tree.
[131032920030] |This is unlike Windows where (in the most common syntax for file paths) there is one directory tree per storage component (drive).
[131032920040] |Mounting is the act of associating a storage device to a particular location in the directory tree.
[131032920050] |For example, when the system boots, a particular storage device (commonly called the root partition) is associated with the root of the directory tree, i.e., that storage device is mounted on / (the root directory).
[131032920060] |Let's say you now want to access files on a CD-ROM.
[131032920070] |You must mount the CD-ROM on a location in the directory tree (this may be done automatically when you insert the CD).
[131032920080] |Let's say the CD-ROM device is /dev/cdrom and the chosen mount point is /media/cdrom.
[131032920090] |The corresponding command is
[131032920100] |After that command is run, a file whose location on the CD-ROM is /dir/file is now accessible on your system as /media/cdrom/dir/file.
[131032920110] |When you've finished using the CD, you run the command umount /dev/cdrom or umount /media/cdrom (both will work; typical desktop environments will do this when you click on the “eject” or “safely remove” button).
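Collected into one place, the commands from the example above are (device and mount point as in the text; run as root or via sudo):
mkdir -p /media/cdrom            # create the mount point if it does not exist yet
mount /dev/cdrom /media/cdrom    # attach the CD-ROM's filesystem at /media/cdrom
ls /media/cdrom/dir/file         # the file /dir/file on the CD is now visible here
umount /media/cdrom              # detach it again (umount /dev/cdrom also works)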
[131032920120] |Mounting applies to anything that is made accessible as files, not just actual storage devices.
[131032920130] |For example, all Linux systems have a special filesystem mounted under /proc.
[131032920140] |That filesystem (called proc) does not have underlying storage: the files in it give information about running processes and various other system information; the information is provided directly by the kernel from its in-memory data structures.
[131032930010] |Is it possible to know when you're at the first bash prompt of a terminal?
[131032930020] |This is sort of a continuation of my last question: printing (saving) the last bash input command
[131032930030] |Now I want to know if it's possible to know when you're at the first bash prompt of a terminal.
[131032930040] |So I'm displaying the last command run above the current prompt.
[131032930050] |And I'm thinking it would be cool if when you're at a fresh terminal, to display the last N commands.
[131032930060] |I guess I could just echo the last commands inside my .bashrc, but that doesn't seem like a good idea.
[131032930070] |And also it'd be cool to get the last N commands shown when the scrollback is cleared (via clear).
[131032930080] |Is any of that (reasonably) possible?
[131032940010] |I guess I could just echo the last commands inside my .bashrc, but that doesn't seem like a good idea.
[131032940020] |And also it'd be cool to get the last N commands shown when the scrollback is cleared (via clear).
[131032940030] |I'm not sure there's another way to do it without modifying the bash source...
[131032940040] |What seems bad about putting it in .bashrc though?
[131032940050] |You should just add an interactivity test so it doesn't print it in any script:
[131032940060] |You could also create an alias for clear which basically does the same.
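A hedged sketch of both suggestions for ~/.bashrc (show_last_cmds is a made-up helper name; 5 is an arbitrary count):
show_last_cmds() { history "${1:-5}"; }          # print the last N history entries
case $- in
  *i*) show_last_cmds 5 ;;                       # only run in interactive shells
esac
alias clear='command clear; show_last_cmds 5'    # re-show them after clearing the scrollback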
[131032950010] |Evolution of Operating systems from Unix
[131032950020] |Can you explain the evolution hierarchy of operating systems (Linux and Windows) from Unix?
[131032950030] |Is the Unix OS paid?
[131032950040] |Please explain Unix's origin, then GNU/Linux's origin, and Windows' origin
[131032950050] |What is the relation between Windows and Unix?
[131032960010] |Renjith, there is no "root" operating system.
[131032960020] |The history of operating systems is pretty long.
[131032960030] |I'd just recommend you read the following articles on Wikipedia:
[131032960040] |http://en.wikipedia.org/wiki/History_of_operating_systems
[131032960050] |http://en.wikipedia.org/wiki/History_of_Windows
[131032960060] |http://en.wikipedia.org/wiki/History_of_Unix
[131032960070] |http://en.wikipedia.org/wiki/History_of_Linux
[131032960080] |Have fun, it's really interesting stuff...
[131032970010] |This is a highly simplified history of Unix and its derivatives.
[131032970020] |Windows does not figure in it because its history is essentially separate.
[131032970030] |Once upon a time operating systems were complex and unwieldy.
[131032970040] |One day in the late 1960s, Ken Thompson, Dennis Ritchie and a few of their colleagues at AT&T Bell Labs decided to write a simpler version of Multics to run games on their PDP-7, and thus Unix was born.
[131032970050] |AT&T held the rights to the code, and licenses were expensive.
[131032970060] |Many other companies sublicensed Unix and sold their own version.
[131032970070] |Major players included DEC, HP, IBM, Sun.
[131032970080] |Unix variants added their own extensions, often nicking ideas from each other and from academia.
[131032970090] |Meanwhile, in Berkeley, a number of academics were unhappy with the licensing situation and decided to create a version of Unix that didn't include any AT&T-licensed code.
[131032970100] |Thus in the early 1980s the Berkeley Software Distribution, or BSD, became a free variant of Unix.
[131032970110] |BSD first ran on Minicomputers such as PDP-11 and VAXen.
[131032970120] |Meanwhile, on the East coast, Richard Stallman threw a fit when he couldn't get the source code to his printer driver.
[131032970130] |He founded the GNU (GNU's not Unix) project in 1983 intending to make a free Unix-like operating system, only better.
[131032970140] |After a little hesitation, the kernel of this operating system was chosen to be Hurd, which is going to be usable any decade now.
[131032970150] |Many components of the GNU project are included in all current free unices, in particular the compiler GCC.
[131032970160] |Meanwhile, in Finland, Linus Torvalds went on a hacking binge in summer 1991.
[131032970170] |When he woke up, he realized that he'd written an operating system for his PC, and he decided to share it by putting it on an FTP server in a directory called linux.
[131032970180] |The success exceeded his expectations.
[131032970190] |Many people created software distributions including the Linux kernel, many GNU programs, the X Window System, and other free software.
[131032970200] |These distributions (Slackware, Debian, Red Hat, SUSE, Gentoo, Ubuntu, etc.) are what people generally refer to when they say “Linux”.
[131032970210] |Most Linux distributions consist mostly of free-as-in-speech software, though software that is merely free-as-in-beer is often included when no free equivalent exists.
[131032970220] |Other currently existing unices include the various forks of BSD (you get a choice of FreeBSD, NetBSD and OpenBSD, all being free, open and developed through the 'net), as well as a diminishing number of commercial variants targeted towards servers: AIX, HP-UX, Solaris, and a few very minor contenders.
[131032970230] |Another proprietary unix-based operating system is Mac OS X running on Apple desktops, laptops and PDAs.
[131032980010] |For a really crazy diagram of the evolution of UNIX, see here.
[131032980020] |Not that it's very useful, though :).
[131032990010] |Gilles explained the evolution from one piece to another very well here, so I will cover the topic from a broader perspective and give some hints for further research.
[131032990020] |From Bazaars and Research Labs to Closed Blobs and Market-marginalized Groups that I think are not that marginal at all
[131032990030] |The key term to play with evolution is power.
[131032990040] |If you are dependent on an OS, for example in the form of security updates, you are dependent on the software-manufacturer and hence it has power over you.
[131032990050] |It can decide to stop publishing security updates or do any evil that its license allows it to do.
[131032990060] |If the OS is closed, the users must feel helpless because they cannot fix problems on their own, perhaps shown in hypocritical feelings such as "again the damn driver is broken, XYZ's fault".
[131032990070] |In the latter discussion, you can s,OS,software,g and it does not really lose its meaning about the power relationship, clearly something timeless.
[131032990080] |I won't reinvent the damn wheel so please read about Bazaars, corporations and social-environmental-and-other problems below.
[131032990090] |Start
[131032990100] |Homebrew computer club before Apple and such things when things were open
[131032990110] |Ending, Now and Still Evolving
[131032990120] |Amos Batto's essay explaining some reasons behind closing things (Internet Archive article, cannot be found via Google easily)
[131032990130] |For visualizing the evolution, please see the picture below from Wikipedia, where things started from the Bazaar (orange phase) and ended in the red-green phase where things are still evolving or even chaotic.
[131032990140] |The picture is wrong or pro-Minix advertising in some points, n.b. the comments.
[131032990150] |Please read about the Minix-Linux turning point and differentiate the marketing terms free, free-as-in-beer and free-as-in-speech -- the debate here.
[131032990160] |In short, Minix was not free-as-in-speech and Tanenbaum made money with it, while Linus offered his OS with a less restrictive license; these are very important years to understand, so do not get misguided by some oddities in the picture.
[131032990170] |This crucial point later affected the formation of separate parties such as FreeBSD, Linux and Minix into their current form.
[131032990180] |Please note that I don't call them by cohesive terms such as "open-source" because the term is getting misused.
[131032990190] |When I used the term bazaar in the title, I really meant it.
[131032990200] |It is to some extent chaotic so it is hard to get a large picture but then again there are some very systematic groups.
[131032990210] |The one who can offer the most appropriate solution to the current problem will get rewarded and can sell his/her products on the market.
[131032990220] |Sometimes a single developer beats a huge 100-head dev team, and sometimes it is the other way around.
[131032990230] |Torvalds has drawn a good analogy (in a talk or email somewhere) between closed blobs and open code on the one hand and alchemy and science on the other.
[131032990240] |I think his point was that while alchemists are extinct in science, you can still find them in the software area.
[131032990250] |He did not explain it much, but my idea is that alchemists today exist in software engineering because it can be useful from time to time; some practical situations require creative solutions.
[131032990260] |It is a bit like how physicists used the Dirac delta distribution for about 30 years (according to my lecturer) before mathematicians agreed that it could be formulated rigorously in mathematics; this phase may take some time.
[131032990270] |But do not underestimate the speculative frenzy in human instincts; it is surprising how many times I have seen people write something "new", only to find it was already invented.
[131032990280] |Welcome back to the bazaar!
[131032990290] |Culture, Money and Intellectual Capital
[131032990300] |The FOSS movement is not a marginal body; please note that they do have their own things such as music (here or here) and more and more hardware (here) -- if your media says something else or nothing, they are ignoramuses.
[131032990310] |The movement is more like a culture -- so the term movement is rather misleading -- with its own slants, habits and even pastimes; the idea is perhaps hard to grasp, but the more I get into it, the more I think it is so. But beware, wanna-be users: it does no good to get involved in meaningless debates about free and closed if the terms are not well-defined or documented, like here.
[131032990320] |I often find it stupid that people compare this decentralized thing to certain bureaucratic firms (not all of them), because the goal of many innovators is often to have fun rather than to make money.
[131032990330] |So a question like "do they get paid?" is a bit arrogant; did you get paid to be a Roman, or do you now get paid to be a citizen of XYZ?
[131032990340] |Probably not, or perhaps -- with a successful endeavor -- you need to wisely choose your camp, as always.
[131032990350] |There are however other important things, such as knowledge, responsibility and co-operation, sometimes hard to measure in $.
[131032990360] |Is it actually called IC by business people?
[131032990370] |If so, you may gain important skills by engaging in some project, an asset highly appreciated by knowledgeable firms -- but again, I have seen too much wanna-be reinventing-the-wheel code, so do good research before getting too involved.
[131032990380] |If you want to know how to get "paid" in this field,
[131032990390] |I would suggest researching the risk-reward relationship, perhaps on Money.SO.
[131032990400] |The Unix tools are like science: they are very liberal and allow you to do many things.
[131032990410] |It depends on the user whether you get paid or not.
[131032990420] |I think that to get paid you need to get into some risky projects: time-consuming, hard, or ignored ones.
[131032990430] |There is no easy way to get paid anywhere.
[131032990440] |Why would there be?
[131032990450] |If there were an easy way, the markets would not be efficient.
[131032990460] |The reason why some big corps get paid is that they have taken huge risks and loans and now get rewarded; sometimes their actions are evil and they may get punished.
[131032990470] |But for an individual, I suggest slow, steady advancement.
[131032990480] |To understand why, think about Unix's early history in research labs: a lot of slow, monotonous work and prototyping.
[131032990490] |Wanna know more?
[131032990500] |Your questions have too many confusions to attack them easily, such as a presupposition about hierarchy that ignores the idea of chaos, and ambiguous terms such as Windows -- dev branch or branding?
[131032990510] |And the phrase "from Unix" in the title tastes too much of appealing to populism on a Unix question site.
[131032990520] |It is hard to say how /dev/null things such as Windows and other closed systems evolved, because we don't know them -- except by speculation!
[131032990530] |People who know cannot speak.
[131032990540] |The source is primary, the rest is secondary.
[131032990550] |Be sure which blindfolds (i.e. which search engine) you use for this topic; many valuable articles are buried under irrelevant information, as was the case with the above removed article.
[131032990560] |As a starting point, you could try some links offered above.
[131033000010] |TL;DR History of UNIX:
[131033000020] |60's: Multics
[131033000030] |70's: Hippies are paid to develop an OS for AT&T. They stole ideas from Multics and timesharing systems, and had to make it super efficient to work on a PDP-11, which is about equivalent in processing power to a NES.
[131033000040] |80's: Companies find that UNIX is pretty neat and try to sell it.
[131033000050] |This includes Microsoft with their variant, XENIX.
[131033000060] |They fight amongst/sue each other, splinter UNIX into incompatible proprietary dialects, and run UNIX into the ground.
[131033000070] |90's: Linus Torvalds + 200,000 people from the Internet hack Minix, an educational UNIX, into a x86 Unix clone called Linux and it grows slowly while Microsoft says "Screw this UNIX crap" and develops NT, stealing ideas from VMS.
[131033000080] |00's: Linux grows, becomes respected server OS. Proprietary UNIXes are on permanent decline.
[131033000090] |Smartphones/mobile devices coming into the picture.
[131033000100] |Google twists Linux into a mobile OS, and other companies flirt with putting Linux on mobile devices.
[131033010010] |How to identify which xterm a shell or process is running in?
[131033010020] |I usually use long running X sessions with several virtual desktops and many many xterms.
[131033010030] |I also use job control in the shell (zsh).
[131033010040] |Sometimes I wish I could identify an xterm a shell or process is running in (or even suspended in) with a simple command.
[131033010050] |For example, you edit a file with vim and vim warns you that it is already opened with another vim instance which is still running.
[131033010060] |But now you have forgotten in which xterm this vim with process-id XYZ was started and suspended.
[131033010070] |Killing it does not work, because it is suspended.
[131033010080] |Resuming it via a signal and then killing it could work, but it may screw up a process running in the foreground, and perhaps you don't want to kill the vim instance anyway because it has several windows set up ...
[131033010090] |Currently I am using awesomewm, but I am also interested in solutions for other wms.
[131033020010] |You can use ps -o ppid= PID
to get the parent ID of process PID, which will probably be the shell that launched it.
[131033020020] |The parent ID of that shell will be the terminal containing it.
[131033020030] |To test, I spawned a process that would stay around for a while:
[131033020040] |Then I looked up the parent of process 31177, and what command it is:
[131033020050] |31107 is the zsh process I ran the sleep 5m
in.
[131033020060] |I repeated that on the zsh
process
[131033020070] |31097 is the xterm
that my zsh
shell was running in
[131033020080] |If you're not sure how many levels up the parent you want is, you can use this to search for an ancestor with a given command name:
[131033020090] |Test:
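The command listings that originally accompanied this answer are not shown here. As a hedged sketch of the same idea (assuming GNU ps and a Bourne-style shell; the function name is my own), walking up the parent chain until a process with a given command name is found could look like this:

    # walk up the process tree from PID $1 until a process named $2 is found
    find_ancestor() {
        pid=$1
        name=$2
        while [ "$pid" -gt 1 ]; do
            if [ "$(ps -o comm= -p "$pid")" = "$name" ]; then
                echo "$pid"
                return 0
            fi
            pid=$(ps -o ppid= -p "$pid" | tr -d ' ')
        done
        return 1
    }
    # example, using the PID from the answer above:
    find_ancestor 31177 xterm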
[131033030010] |Xterm puts the variable WINDOWID
in the environment of its subprocess.
[131033030020] |Its value is the window ID of the xterm window.
[131033030030] |There is no POSIX way of querying the environment of a process based on its PID; here's a Linux way of querying the environment of process $pid
and extracting the WINDOWID
variable:
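A minimal sketch of such a query, assuming a Linux /proc filesystem and a hypothetical PID stored in $pid:

    # /proc/$pid/environ holds NUL-separated VAR=value pairs
    tr '\0' '\n' < "/proc/$pid/environ" | grep '^WINDOWID='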
[131033030040] |You can then find or act on the window with wmctrl
or through your window manager's interface.
[131033030050] |If you use screen, first try the STY
variable, which is set to the name of the screen session.
[131033030060] |You can connect to that session with screen -rd -S "$sessionname"
.
[131033040010] |Linux as a complete development platform ?
[131033040020] |I want to make my Fedora Linux capable of the following:
[131033040030] |Use Linux as a complete development platform without requiring any other OS installation, but still be able to build and test programs for different platforms.
[131033040040] |Completely replace a Windows machine for all the other work, e.g. Office, Paint, Remote Desktop, etc.
[131033040050] |Can you suggest open source projects and tools for achieving the above objectives?
[131033050010] |Define "platform".
[131033050020] |You can't reasonably test software meant for use on Windows or Mac without actually running those operating systems, even if it's possible to use cross-compilation to build it on Linux.
[131033050030] |For real testing on other operating systems, VMware is a great tool.
[131033050040] |The second part of your question should probably be a totally separate question.
[131033050050] |But you can use OpenOffice.org or Google Docs to replace MS Office, Pinta or GIMP or whatever to replace Paint, VNC or Empathy+Vinagre to replace Remote Desktop, etc etc.
[131033050060] |It would be easier to answer these questions if you gave some more specific use cases.
[131033060010] |Weeeeeeel, #1 is quite the doozie, sir.
[131033060020] |I don't know that there is any way to achieve this goal.
[131033060030] |Cross-compiling to other *nixes can be done (with a bit of a headache) on your development box, but how exactly are you supposed to run that code without the appropriate OS and architecture?
[131033060040] |There are extensive forays into system emulation out there, but ultimately this is a kludge.
[131033060050] |If you want to test software on an OS/architecture, run it on that OS/architecture!
[131033060060] |Now, on the other hand, there are tools out there that make it simple to build the same project in vast ranges of configurations.
[131033060070] |Cmake is probably the most well-known and well-tested.
[131033060080] |At least this will assist you in testing your project on other systems, or encourage others to do so for you.
[131033060090] |To be fair, I'm pretty sure this is impossible on other operating systems as well.
[131033060100] |#2 is much much easier
[131033060110] |Open Office
[131033060120] |Gimp
[131033060130] |Gnome
[131033060140] |KDE
[131033060150] |Remote desktops
[131033060160] |and on and on.
[131033060170] |I would help more, but your question's acuity sharply drops off.
[131033070010] |You can easily do cross-platform development whether you are a systems programmer, a web developer or a desktop application developer.
[131033070020] |If you are into systems, then any utilities and/or drivers you write for linux are likely to work well for other *nix with very minimal modifications.
[131033070030] |Provided that you write standard C code and don't use too many system-specific calls, they may even be easy to port to Windows.
[131033070040] |If you are a desktop application dev, you can target GTK, QT or wxWidgets and your app will likely work well across the major 3 platforms today (*nix, Windows, Mac).
[131033070050] |Again, keep system specific calls to a minimum or isolate them into a wrapper library that's going to be system specific.
[131033070060] |You can also target a virtual machine like the JVM and/or CLR, which will allow applications to work across the board.
[131033070070] |If you are a web dev, then you are likely to run into too many different alternatives to choose from.
[131033070080] |I prefer a little web server called Cherokee and I develop and run ASP.NET (mono) and Django apps that run on it and use a PgSQL backend.
[131033070090] |So the conclusion is that cross-platform development in Linux can be done, provided that you can compile the code on the target platform and you keep that in mind while writing your code or if you target a VM.
[131033070100] |The other point is that you may run into The Paradox of Choice and not know what to use.
[131033070110] |For that read below my answer to the second question.
[131033070120] |As to the second question, the best resource I have found is called Open Source Alternatives.
[131033070130] |This web site lists out commercial software and their open source alternatives.
[131033070140] |Almost all the alternatives run on Linux and FreeBSD.
[131033080010] |It depends on your target platform.
[131033080020] |If you are developing for a virtual platform (Java, or even .NET) you should be more than fine.
[131033080030] |There are some platforms that are even friendlier on Linux than on Windows (e.g. Ruby).
[131033080040] |As for Linux as a desktop, you just have to list the software you are looking for.
[131033080050] |There is hardly any Windows desktop software nowadays that does not have a Linux alternative.
[131033090010] |MonoDevelop should be able to help you get a handle on the development side:
[131033090020] |MonoDevelop is an IDE primarily designed for C# and other .NET languages.
[131033090030] |MonoDevelop enables developers to quickly write desktop and ASP.NET Web applications on Linux, Windows and Mac OSX.
[131033090040] |MonoDevelop makes it easy for developers to port .NET applications created with Visual Studio to Linux and to maintain a single code base for all platforms.
[131033090050] |(emphasis is mine)
[131033100010] |ZSH autocomplete
[131033100020] |How do I make zsh give me the autocomplete options but not fill in the prompt line with the first result?
[131033100030] |For example, this is the behavior I'm seeing:
[131033100040] |I want it to display the options, but not fill in the prompt line.
[131033100050] |Also, if I type in a valid command, but there are other valid commands that start with that, it doesn't auto complete.
[131033100060] |So
[131033100070] |will give me just su
instead of the option for sudo
[131033110010] |from man zshoptions
:
[131033110020] |AUTO_MENU
[131033110030] |Automatically use menu completion after the second consecutive request for completion, for example by pressing the tab key repeatedly.
[131033110040] |This option is overridden by MENU_COMPLETE.
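As a hedged sketch (whether this alone gives exactly the behavior you want depends on the rest of your completion setup), the corresponding lines in ~/.zshrc would be:

    # list the matches on Tab instead of inserting/cycling into the first one
    unsetopt auto_menu
    unsetopt menu_complete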
[131033120010] |What's a good lightweight distribution to run on old hardware?
[131033120020] |I need to run a good Linux distribution on an old notebook with an Athlon 1.2 and 128MB of RAM.
[131033120030] |I tried using a lite version of Win XP and some popular Linux distros, but all of them were too heavy for this machine.
[131033120040] |The distro needs to be easy for a beginner to use.
[131033120050] |It doesn't need to be small, just light.
[131033120060] |I did a lot of research, including here, and couldn't get a good answer.
[131033130010] |Have a look at this question over on Super User Lightweight GUI Linux distribution for really old computer
[131033140010] |It's not a distribution, and I'm sure I sound like a broken record, but I find OpenBSD to be an excellent Unix-like operating system that runs very nicely on older hardware, has excellent manual pages, and generally just works.
[131033140020] |I also find it excellent for a beginner, due to its excellent FAQ and documentation.
[131033150010] |I have used Damn Small Linux many times to great success for older machines with little in the way of system resources.
[131033150020] |It is based on Debian, so you have the benefit of apt/aptitude, and as I recall it has a couple of versions that come with different levels of pre-installed software. http://www.damnsmalllinux.org
[131033160010] |Puppy Linux and Arch Linux come to mind
[131033160020] |Arch is a pain to configure, but will give you the best performance you can get from any hardware.
[131033170010] |Multiple liveCDs on one USB
[131033170020] |Is there an easy way to install multiple liveCDs onto a USB drive?
[131033170030] |Specifically kubuntu and arch.
[131033170040] |I'm just wondering if there's an easy way to do this since both distros include their own boot loader when installed the way they suggest.
[131033180010] |There is a guide to do exactly this on this question:
[131033180020] |http://unix.stackexchange.com/questions/665/installing-grub-2-on-a-usb-flash-drive
[131033180030] |as the first answer.
[131033180040] |It answers it perfectly (assuming that I know what you are asking) and allows you to have multiple isos on a usb stick, by installing grub.
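For a rough idea of what the result can look like, here is a hedged sketch of a GRUB 2 menuentry that loop-mounts an Ubuntu-family ISO; the path and the kernel/initrd names are assumptions and differ between releases (Arch ISOs need different boot parameters entirely):

    menuentry "Kubuntu live ISO (example)" {
        set isofile=/isos/kubuntu.iso
        loopback loop $isofile
        linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile noprompt
        initrd (loop)/casper/initrd.lz
    }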
[131033190010] |downgrading packages with pacman
[131033190020] |What is the best, or most suitable, method of downgrading packages in Arch?
[131033190030] |On normal days I just run the following to accomplish a quick downgrade:
[131033190040] |But today I am downgrading my kernel, which is a very vital part of my linux life.
[131033190050] |I'd like to do it correctly.
[131033190060] |Is this the proper way?
[131033190070] |If not, what is?
[131033200010] |Generally speaking unless there is a dep depending on it (in which case leaving it that way would break something), pacman -U
will do the job correctly.
[131033200020] |Also, you should avoid -Rd when doing things that could break your system, unless you know what you are doing: without it, pacman will complain if you're trying to downgrade in a way that would break a dependency.
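For illustration, the -U route typically reinstalls an older package straight from pacman's cache; the filename below is hypothetical, so substitute the version you actually want:

    sudo pacman -U /var/cache/pacman/pkg/kernel26-2.6.35.7-1-x86_64.pkg.tar.xz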
[131033210010] |How do I route web requests into my private network depending on the domain in the url?
[131033210020] |Scenario:
[131033210030] |Due to limited number of IPs, several domains are resolved into the same IP address.
[131033210040] |However, these domains may not all run on the same webserver, VM, or even dedicated machine.
[131033210050] |Therefore, I would like to route the domains at the point of entry into the private network to the right place on condition of the domain in the url.
[131033210060] |How can I do this?
[131033220010] |I am unsure if I fully understand your question.
[131033220020] |If I do, you have multiple machines running as HTTP servers behind NAT.
[131033220030] |When some request arrives, you want to forward it to one of your servers, right?
[131033220040] |If you are using Apache and mod_rewrite, you might be able to pull it off this way:
[131033220050] |http://www.linuxquestions.org/questions/linux-networking-3/apache-forward-request-to-another-box-264864/
[131033220060] |http://www.sematopia.com/2007/09/apache-forwarding-requests-to-another-server/
[131033220070] |Cheers.
[131033230010] |I was looking for the same thing a while ago.
[131033230020] |I never got around to doing it, but here is what I thought might work.
[131033230030] |Use the mod_proxy module and its NoProxy directive.
[131033230040] |http://httpd.apache.org/docs/2.2/mod/mod_proxy.html#noproxy
[131033240010] |Most routers/firewalls allow you to redirect traffic based on a certain port, e.g. all SMTP traffic (port 25) is redirected to 192.168.1.1.
[131033240020] |But if you have multiple servers to handle your traffic (one server per domain),
[131033240030] |you need to install something like a reverse proxy (nginx supports this for HTTP, IMAP, POP3).
[131033240040] |For instance, all traffic to port 80 is redirected to 192.168.1.2
which runs nginx and depending on the host name will redirect to either localhost
or 192.168.1.4
.
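A minimal sketch of such a host-based nginx setup on 192.168.1.2 (the domain names and the backend port on localhost are assumptions):

    server {
        listen 80;
        server_name first-domain.example;
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
        }
    }
    server {
        listen 80;
        server_name second-domain.example;
        location / {
            proxy_pass http://192.168.1.4;
            proxy_set_header Host $host;
        }
    }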
[131033250010] |I have done something similar using the ProxyPass directive under mod_proxy in Apache.
[131033250020] |First of all, use name-based virtual hosting to allow different domains to host different sites on the same IP; then, for the domains that are associated with an internal webserver, you have a config like so:
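The configuration block itself is not shown here; a hedged reconstruction, using the somedomain.com and 10.0.1.42 values from the rest of this answer, might look like:

    <VirtualHost *:80>
        ServerName somedomain.com
        ProxyPreserveHost On
        ProxyPass / http://10.0.1.42/
        ProxyPassReverse / http://10.0.1.42/
    </VirtualHost>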
[131033250030] |This will result in all requests to http://somedomain.com
be transparently proxied to an internal site hosted on http://10.0.1.42
.
[131033250040] |HTH
[131033260010] |How to know the "level" of shells I am in?
[131033260020] |Hi,
[131033260030] |If I open a terminal like xterm
I will have a shell.
[131033260040] |Then if I use ssh
or zsh
I will have another "level" of shell.
[131033260050] |Is there a way to know how many times I have to Ctrl+D
or type exit
to exit all of them?
[131033260060] |My real intention is to exit everything except the "root" shell.
[131033260070] |It will also be nice to know what effect(s) terminal multiplexers (like screen
) have on the solution.
[131033260080] |PS: Please feel free to change the title, I don't know if those are the correct terms.
[131033270010] |One (admittedly not perfect) way is to add the result of ps --no-headers -o comm $PPID
to your shell prompt.
[131033270020] |This will tell you the name of the process that spawned your shell.
[131033270030] |If you're inside a nested zsh, it will say "zsh".
[131033270040] |If you're inside a top-level zsh, it will read "xterm" or "screen" or whatever your shell is running in.
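A minimal zsh sketch of putting that parent name into the prompt (the prompt layout itself is just an example):

    # ~/.zshrc
    setopt prompt_subst
    PROMPT='[$(ps --no-headers -o comm $PPID)] %~ %# '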
[131033270050] |To tell whether you're inside an ssh or su session, you can simply look at the hostname and username part of your prompt.
[131033270060] |When you're running nested screens (which I don't imagine is a common situation), there's no way I can think of to tell whether you're in the top-level shell of a nested screen, or the top-level shell of the top-level screen.
[131033270070] |You could configure screen to always display a status line, which would cause multiple status lines to be displayed, if you're in nested screens.
[131033280010] |Not very elegant, but you could use the tree view in htop to see the parent-child relationship of your shell to other running processes,
[131033280020] |and use that to deduce the number of shells you will need to exit before arriving at the "root" shell.
[131033290010] |You have in fact hit upon the correct term¹.
[131033290020] |There is an environment variable SHLVL
which all major interactive shells (bash, tcsh, zsh) increment by 1 when they start.
[131033290030] |So if you start a shell inside a shell, SHLVL
increases by 1.
[131033290040] |This doesn't directly answer your concern, however, because SHLVL
carries over things like terminal emulators.
[131033290050] |For example, in my typical configuration, $SHLVL
is 2 in an xterm, because level 1 corresponds to the shell that runs my X session (~/.xinitrc
or ~/.xsession
).
[131033290060] |What I do is to display $SHLVL
in my prompt, but only if the parent process of the shell is another shell (with heuristics like “if its name ends in sh
plus optional punctuation and digits, it's a shell”).
[131033290070] |That way, I have an obvious visual indication in the uncommon case of a shell running under another shell.
[131033290080] |Maybe you would prefer to detect shells that are running directly under a terminal emulator.
[131033290090] |You can do this fairly accurately: these are the shells whose parent process has a different controlling terminal, so that ps -o tty= -p$$
and ps -o tty= -p$PPID
produce different output.
[131033290100] |You might manually reset SHLVL
to 1 in these shells, or set your own TERMSHLVL
to 1 in these shells (and incremented otherwise).
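A hedged sketch of that reset, suitable for ~/.zshrc; it reuses the two ps invocations mentioned above:

    # treat a shell whose parent has a different controlling tty as level 1
    if [ "$(ps -o tty= -p $$)" != "$(ps -o tty= -p $PPID)" ]; then
        export SHLVL=1
    fi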
[131033290110] |¹ Although one wouldn't think it looking at the manual pages: none of the three shells that support it include the word “level” in their documentation of SHLVL
.
[131033300010] |Application-specific keymapping
[131033300020] |In general, I want to make application-specific keymaps that work only in that application and don't affect any other app.
[131033300030] |For example, I already use my Caps key to toggle input language (via xorg.conf), but I want Capslock to behave like Esc in vim.
[131033300040] |Looks like xmodmap doesn't have any options related to that.
[131033300050] |I use Gnome and would also appreciate any third-party applications.
[131033310010] |I found a solution in evrouter.
[131033310020] |It maps any keyboard event onto keypress in X.Org if active window title is matched by regexp you specify.
[131033310030] |It also helps me to deal with Zoom key on my Microsoft Natural Keyboard.
[131033310040] |The bad thing is that the default X keypress also occurs.
[131033320010] |ssh and character encoding
[131033320020] |When I ssh
into my VPS, I have irssi
running in screen.
[131033320030] |When someone sends a unicode character (such as © or €), irssi
displays garbage when I use it via the screen in a ssh
session.
[131033320040] |If I connect to that irssi
using irssi's proxy module, from irssi running on my local computer, it shows up correctly.
[131033320050] |Likewise, if I run ghci on my VPS (outside a screen) and enter in one of those characters, it crashes.
[131033320060] |So, obviously, there is a character encoding issue of some sort with my connection to my VPS, either in ssh or the system setup.
[131033320070] |How can I find out what is causing this, and solve it?
[131033320080] |Details:
[131033320090] |Client system
[131033320100] |Arch Linux x64
[131033320110] |UTF-8 encoding
[131033320120] |VPS system
[131033320130] |Ubuntu Server 10.04
[131033320140] |Unknown encoding used.
[131033320150] |How do I find this?
[131033320160] |(I just have to look in my /etc/rc.conf for Arch)
[131033330010] |Running the locale
command will give you information about your locale settings; the character encoding is given by the LC_CTYPE
setting.
[131033330020] |Under Ubuntu, the default locale settings are given in /etc/default/locale
.
[131033330030] |You can change the character encoding by setting LC_CTYPE
in your ~/.profile
on the VPS, e.g.
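Presumably something along these lines:

    # ~/.profile on the VPS
    export LC_CTYPE=en_US.UTF-8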
[131033330040] |You'll have to make sure that the en_US.UTF-8
locale is available.
[131033330050] |Ubuntu only generates locale data for requested locales.
[131033330060] |All English locales should be available if you have the package language-pack-en-base
installed.
[131033330070] |You can manually request their generation with
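On Ubuntu that would presumably be:

    sudo locale-gen en_US.UTF-8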
[131033330080] |You can also add entries to /var/lib/locales/supported.d/local
to make sure a particular locale is installed (e.g., add the line en_US.UTF-8 UTF-8
).
[131033340010] |What IDE do you use for Mono development on KDE?
[131033340020] |Currently I don't have a Linux installation with a GUI.
[131033340030] |All are running text mode.
[131033340040] |When I do, I usually use KDE.
[131033340050] |On Windows I am a .NET developer and I haven't done any Mono development, yet.
[131033340060] |I heard that Monodevelop is only for GNOME.
[131033340070] |If you develop Mono on a KDE environment, what IDE do you use?
[131033350010] |If you're really Qt gung-ho and just can't stand any GTK+ stuff on your desktop, you might be out of luck.
[131033350020] |If you are, on the other hand, not a library purist, may I suggest MonoDevelop?
[131033350030] |Monodevelop is an IDE primarily designed for C# and other .NET languages.
[131033350040] |MonoDevelop enables developers to quickly write desktop and ASP.NET Web applications on Linux, Windows and Mac OSX.
[131033350050] |MonoDevelop makes it easy for developers to port .NET applications created with Visual Studio to Linux and to maintain a single code base for all platforms.
[131033350060] |Of course, you can also just write your code using Emacs or Vim without any real problems.
[131033360010] |The important thing to note here is that MonoDevelop works fine in KDE.
[131033360020] |It does not require you to use GNOME.
[131033360030] |This is true of pretty much every GTK+ application.
[131033370010] |There is no reason you can't use Monodevelop on KDE.
[131033370020] |All GTK+ apps should work; the only real downsides are that it might look a little bit alien and that it's going to pull in a large set of libraries that you "don't need" unless you have other GTK+ apps installed.
[131033370030] |FWIW, I use emacs for most of my Mono development.
[131033380010] |Have you checked out KDevelop 4 or Kate?
[131033380020] |Disclaimer: I don't develop mono and I haven't been able to get the vi bindings in kate to be good enough to replace vim yet.
[131033390010] |How to print specific pages from the command line?
[131033390020] |Is there a way to send a PDF file (or files) to the printer via the command line, but print only, say, odd-numbered pages?
[131033390030] |E.g., lpr -{some option} *.pdf
Or perhaps {some command to get odd-numbered pages} *.pdf | lpr
.
[131033390040] |This would be faster than opening each file, opening the Print dialogue, and telling it to print pages 1, 3, 5, 7, 9...
[131033390050] |The idea is to print all odd pages, then I can print the even numbered pages on the other side of the paper.
[131033400010] |Try
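Presumably the CUPS page-set option, along these lines (file names are placeholders):

    # print only the odd pages of each PDF
    lpr -o page-set=odd *.pdf
    # after flipping the stack, print the even pages
    lpr -o page-set=even *.pdf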
[131033400020] |You can find the documentation of this and other lpr options in the cups documentation.
[131033410010] |An alternative to the cups solution by fschmitt - for example if you only have some limited lpr available - is the command psselect.
[131033410020] |For example for manual duplex printing in a printer without a duplex unit:
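A hedged sketch of such a pipeline, assuming the PDF is first converted to PostScript with pdftops (from poppler-utils):

    # odd pages first
    pdftops document.pdf - | psselect -o | lpr
    # re-insert the printed stack, then the even pages (add -r if your printer needs them reversed)
    pdftops document.pdf - | psselect -e | lpr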
[131033410030] |Well, only works if your printer has a rock solid paper transport mechanism ...
[131033410040] |-e selects only the even pages, -o odd ones, and -r reverses the selection
[131033420010] |If you choose to preprocess the PDF (for example because your printing framework is not CUPS and doesn't support page selection), you can do it with pdftk.
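A hedged reconstruction of those pdftk commands (file names are placeholders):

    pdftk document.pdf cat 1-endodd output odd.pdf
    pdftk document.pdf cat 1-endeven output even.pdf
    lpr odd.pdf
    # feed the printed stack back in, then
    lpr even.pdf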
[131033420020] |Depending on how cheap your printer is, you may need to print the odd pages in reverse and the even pages in order: move end-1
to the other command.
[131033420030] |If the document has an odd number of pages, take out the last page from the stack and don't feed it back in the second time.
[131033430010] |@ fschmitt : Thanks, that worked perfectly!
[131033430020] |(Sorry, I can't figure out how to comment instead of leaving a whole answer.)
[131033440010] |Is the Zalman ZM-RSSC 5.1 USB Sound Card supported in Linux?
[131033440020] |And to what extent does this card have support in Linux?
[131033450010] |Just boot up your Linux installation and take a look.
[131033450020] |First, find the PCI vendor and device id.
[131033450030] |Look it up with the vendor if you can't find a box with the hardware connected,
[131033450040] |Or (attached to Linux) use lspci -n
[131033450050] |Or (attached to Windows) use regedit.exe and dig into HKLM\System\CurrentControlSet\Enum\PCI
[131033450060] |Next, grep the driver name from /lib/modules/$(uname -r)/modules.pcimap using the PCI vendor and device identifiers.
[131033450070] |First column is driver name.
[131033450080] |Second column is PCI vendor (when grepping, prefix with 0x0000).
[131033450090] |Third column is PCI device (-||-).
[131033450100] |When you've found out if there's a driver or not, you can use Google to find the documentation for said driver and take a look at whether it does whatever it is you want it to do.
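As a rough sketch of that lookup (the vendor ID below is a placeholder; substitute the IDs you actually found):

    # first column is the driver name; vendor/device IDs are prefixed with 0x0000
    grep -i '0x00001102' "/lib/modules/$(uname -r)/modules.pcimap"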
[131033460010] |Is the Creative Sound Blaster X-FI Surround 5.1 SB1090 USB sound card supported in Linux?
[131033460020] |And to what extent does this card have support in Linux?
[131033470010] |Support exists, though the newer the distribution the better as far as this is concerned.
[131033470020] |The methods of getting it working in Ubuntu 8 & 9 were kludgey, but they worked.
[131033470030] |Strictly speaking, there is driver support.
[131033470040] |Whether or not the sound applications can use it may be another story.
[131033480010] |Mine worked (tested on Ubuntu 10.04 & 10.10).
[131033480020] |But to get multichannel output, it needs to be connected when the computer is being powered on.
[131033480030] |I have issues too: mine would shut off after 30-90 min. of playback.
[131033480040] |During driver initialization it also sends a spike to the amplifier.
[131033480050] |It looks like the firmware/hardware/software does not do a good job of preventing it.
[131033480060] |Don't know why yet.
[131033490010] |grep a tail -f?
[131033490020] |Is it possible to do a tail -f
(or similar) on a file, and grep
it at the same time?
[131033490030] |I wouldn't mind using other commands; I'm just looking for that kind of behavior.
[131033500010] |Using GNU tail
and GNU grep
, I am able to grep a tail -f using the straight-forward syntax:
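Presumably just the obvious pipeline (file name and pattern are placeholders):

    tail -f /var/log/syslog | grep sshd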
[131033510010] |It will work fine; more generally, grep
will wait when a program isn't outputting, and keep reading as the output comes in, so if you do:
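Judging from the description that follows, the example was something along these lines:

    { sleep 5; echo test; sleep 5; } | grep test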
[131033510020] |Nothing will happen for 5 seconds, then grep will output the matched "test", and then five seconds later it will exit when the piped process does
[131033520010] |It works.
[131033520020] |But take care that output is no longer instantaneous: it gets buffered through the pipe.
[131033530010] |You can use netcat to grep the results of tail -f as new results come in quite easily.
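A hedged reconstruction of the two commands described below (traditional netcat syntax; the OpenBSD nc variant drops the -p, and the file and pattern are placeholders):

    # terminal 1: listen on port 1337 and grep whatever arrives
    nc -l -p 1337 | grep something
    # terminal 2: stream the growing file into the listener
    tail -f /var/log/syslog | nc localhost 1337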
[131033530020] |This sets grep to listen to results for input coming from port 1337.
[131033530030] |The second command pipes the output of tail -f to netcat and sends it out localhost 1337.
[131033530040] |To do it locally you need to switch ttys for each of the two sets of commands, or use something like screen.
[131033540010] |Use the above; I use it regularly.
[131033550010] |Globally stop editors from creating ~ files
[131033550020] |Is there a global setting to prevent all text editors from creating backup files?
[131033550030] |I'm sick of changing it in 11 different places.
[131033560010] |As far as I know, there is no single environment variable or configuration setting that is checked by every UNIX editor.
[131033560020] |For Emacs, you can turn off file backups for all files by inserting this into your ~/.emacs
:
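Presumably the standard setting:

    ;; never create foo~ backup files
    (setq make-backup-files nil)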
[131033560030] |GEdit has a boolean configuration key /apps/gedit-2/preferences/editor/save/create_backup_copy
that you can set with gconf-tool
.
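For reference, a hedged sketch of setting that key from the command line (the tool is normally installed as gconftool-2):

    gconftool-2 --set /apps/gedit-2/preferences/editor/save/create_backup_copy --type bool false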
[131033560040] |I'm quite sure there are as many ways to turn backups off as there are editors. :-)
[131033570010] |This actually makes a rather strong argument for "learn one editor well".
[131033570020] |FWIW, the .vimrc statement would be "set nobackup".
[131033580010] |Use a read-only filesystem.
[131033590010] |As others have said, there is no cross-editor configuration options.
[131033590020] |But, here is one pathological solution:
[131033590030] |Write a script that does something like the following:
[131033590040] |Add this script as a cron job that runs every so many minutes.
[131033590050] |If you are just interested in keeping your dropbox folder clean, change the /
to the appropriate folder.
[131033590060] |I'll bet an even more interesting solution can be created by using incron.
[131033600010] |There is an environment variable VERSION_CONTROL
which works for Emacs and other Gnu utilities (unless some gnome inside my computer has been fooling me or something).
[131033600020] |Whether this works for other things I don't know.
[131033610010] |Concept of 'mount point' while installing LINUX
[131033610020] |While installing Linux, it asks for a 'mount point' selection.
[131033610030] |I gave it /
, but I don't know the exact meaning and aim of this.
[131033610040] |Also, now I want to create one more mount point, /home
in my machine with the already installed Linux with mount point /
.
[131033610050] |Is it possible to do that from my current Linux install?
[131033610060] |If yes, what are the steps/commands?
[131033610070] |My understanding of a 'mount point' is that it lets me preserve my /home contents in a safer way, so that they won't get deleted if my current Linux gets corrupted.
[131033610080] |For example, by detaching the hard disk from the machine with the corrupted Linux and connecting it to a new Linux machine, I should get my /home contents back.
[131033620010] |The mount point specifies at which location in the directory hierarchy a device or disk partition appears.
[131033620020] |If you want to move /home
to a new partition, you have to create a new partition for it, say /dev/sda4
and format it, e.g. with ext4.
[131033620030] |Creating partitions and formatting them can be comfortably done using e.g. gparted.
[131033620040] |Then you have to copy the old contents to the new partition and modify /etc/fstab
so /home
points to the new partition.
[131033620050] |As root, do something like this after having created and formatted the partition.
[131033620060] |Again I assume /dev/sda4 for the partition; this is just an example and you have to use your real partition device:
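A hedged sketch of the usual sequence (mount point names are arbitrary):

    # copy the current /home onto the new partition, then mount it in place
    mkdir /mnt/newhome
    mount /dev/sda4 /mnt/newhome
    cp -a /home/. /mnt/newhome/
    umount /mnt/newhome
    mv /home /old_home
    mkdir /home
    mount /dev/sda4 /home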
[131033620070] |Now check if your system is still working correctly.
[131033620080] |If it does, add a line like this to /etc/fstab
:
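Assuming an ext4 filesystem on /dev/sda4, roughly:

    /dev/sda4   /home   ext4   defaults   0   2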
[131033620090] |and delete the backup in /old_home
[131033620100] |If, however, you find that something went wrong, you can move back by not adding (or by removing) the above line in /etc/fstab and doing the following as root:
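Presumably something like:

    umount /home
    rmdir /home
    mv /old_home /home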
[131033620110] |This answer is inspired by the howto on http://embraceubuntu.com/2006/01/29/move-home-to-its-own-partition/
[131033630010] |How can I reproduce commands run on one machine on another machine?
[131033630020] |I would like to install some software on a linux-machine that I have run in VirtualBox.
[131033630030] |Then I would like to do the same thing on a linux-VPS.
[131033630040] |I think that I can save all commands that I run using the history
command.
[131033630050] |Is there any way I could run these commands on another machine?
[131033630060] |Or what is the way to do such things?
[131033640010] |Well, I imagine 2 situations here:
[131033640020] |If I need to run a small number of commands I would just run them again on any other machine (by small number I mean fewer than 10).
[131033640030] |If I need to run many commands I would put them into a bash script and run the script on all the other machines.
[131033640040] |The script should look like this:
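For example (the package names are hypothetical placeholders; each line is simply one of the commands you originally ran by hand, perhaps pulled out of your history):

    #!/bin/bash
    sudo apt-get update
    sudo apt-get install -y postgresql nginx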
[131033640050] |If you are unsure about the outcome of some commands, you can separate the commands by &&
, meaning that the next command will execute only if the previous one was successful. command1 && command2
means that command2 will not be executed if command1 fails.
[131033650010] |Patkos is right, probably it is best to create a script.
[131033650020] |However, for this you sometimes have to experiment around until you get it right, and you need some log of what you did and what the outcome was.
[131033650030] |Here the tool script
comes in handy.
[131033650040] |It creates a record of all activity in the terminal it is running in:
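Basic usage is simply (the log file name is your choice):

    script setup-session.log
    # ... run and experiment with your commands ...
    exit   # or Ctrl+D to stop recording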
[131033650050] |Script makes a typescript of everything printed on your terminal.
[131033650060] |It is useful for students who need a hardcopy record of an interactive session as proof of an assignment, as the typescript file can be printed out later with lpr(1).
[131033660010] |http://docs.fabfile.org/0.9.2/ <--- a nice replacement for scripts, as it can take action if a step fails
[131033670010] |What is the preferred way to install new versions of software?
[131033670020] |I have installed Ubuntu Server 10.10 and now I want to install some software like PostgreSQL, Nginx and PHP.
[131033670030] |But what is the preferred way to get the latest stable version of the software?
[131033670040] |E.g I tried with sudo apt-get install postgresql
but that installed version 8.4 of PostgreSQL but 9.0.1 is the latest version.
[131033670050] |I have had this issue before with Nginx.
[131033670060] |The solution then was to download the source files and compile the latest version, which took some time.
[131033670070] |Later a friend told me that wasn't a preferred way to install software.
[131033670080] |Any recommendations?
[131033680010] |Either you are patient and stick with what you have, or you find an official backport, or you find some unofficial backports, or you build your own package.
[131033680020] |The details depend on the particular package.
[131033680030] |For example, in the case of PostgreSQL, you can either wait a few more weeks until the package officially enters some Ubuntu version, at which point also official backports will appear, or in the meantime you can get unofficial packages (albeit from the same packager) at https://launchpad.net/~pitti/+archive/postgresql.
[131033680040] |Building your own packages from scratch or installing from source is probably not recommendable for the type of rather complex software that you mention, unless you are mainly interested in learning the internals rather than using them in production.
[131033690010] |The recommended way is to use the software versions your distribution provides, i.e. sudo apt-get install postgresql
is correct.
[131033690020] |This might not always be the most recent version released upstream, but in most cases one doesn't really need the latest one.
[131033690030] |If you (think you) need the very latest version of everything, you might want to use a distribution which is very fast with releasing new versions for its packages.
[131033690040] |E.g. while Ubuntu for many packages only releases new version upgrades with its half-yearly major update, Fedora more often adds new versions as updates to the current distribution incarnation.
[131033690050] |Other distributions like Gentoo give releases even faster.
[131033690060] |(However, all the mentioned distributions give security updates in a timely fashion, of course.)
[131033690070] |If you are in general happy to get the latest versions, e.g., twice a year, and only have one or two packages for which you need the latest and greatest, you can stay with Ubuntu or a similar distribution and either build those packages from source and install them, e.g., to /opt or /usr/local, or try to get pre-built packages from the development tree of your distribution.
[131033700010] |If this is something that's very important to you, I'd consider switching to a more "bleeding edge" distribution; distros like Ubuntu follow a fixed release schedule, and also like to wait a bit before software has been through testing before shifting into their stable package repositories.
[131033700020] |(Of course it's much more up to date than some others, like Debian stable.)
[131033700030] |Personally I use Arch Linux.
[131033700040] |Packages get updated to the newest release very quickly in most cases, and it's a "rolling release" distro so you can always be up to date.
[131033700050] |The counterbalance is that sometimes things break with updates and you have to keep on top of things to make the necessary adjustments yourself rather than waiting for a package management team to do it for you.
[131033700060] |A good place to learn about which distributions are more or less up to date is OpenSourceWatershed.
[131033710010] |understanding "mount" as a concept in the OS
[131033710020] |I want to understand what mounting is.
[131033710030] |It is used in different contexts and situations and I can't find resources which:
[131033710040] |Describe the mount concept
[131033710050] |Explain the actions taken by the computer/OS/utility when a mount is performed
[131033710060] |How and in which situations mount is used
[131033710070] |Which features in the Linux mount command are of frequent use and some examples
[131033710080] |(I hear mount applied to diverse entities: directories, flash drives, network cards, etc.)
[131033720010] |As fschnitt points out, a comprehensive answer to this would likely be a chapter in a systems administration manual, so I'll try just to sketch the basic concepts.
[131033720020] |Ask new questions if you need more detail on specific points.
[131033720030] |In UNIX, all files in the system are organized into a single directory tree structure (as opposed to Windows, where you have a separate directory tree for each drive).
[131033720040] |There is a "root" directory, which is denoted by /
, which corresponds to the top directory on the main drive/partition (in the Windows world, this would be C:
).
[131033720050] |Any other directory and file in the system can be reached from the root, by walking down sub-directories.
[131033720060] |How can you make other drives/partitions visible to the system in such a unique tree structure?
[131033720070] |You mount them: mounting a drive/partition on a directory (e.g., /media/usb
) means that the top directory on that drive/partition becomes visible as the directory being mounted.
[131033720080] |Example: if I insert a USB stick in Windows I get a new drive, e.g., F:
; if in Linux I mount it on directory /media/usb
, then the top directory on the USB stick (what I would see by opening the F:
drive in Windows) will be visible in Linux as directory /media/usb
.
[131033720090] |In this case, the /media/usb
directory is called a "mount point".
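For concreteness, a minimal hand-mounting sketch of that USB-stick example (the device name /dev/sdb1 is an assumption; check dmesg or fdisk -l for the real one):

    sudo mkdir -p /media/usb
    sudo mount /dev/sdb1 /media/usb
    # ... work with the files ...
    sudo umount /media/usb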
[131033720100] |Now, drives/partitions/etc. are traditionally called "(block) devices" in the UNIX world, so you always speak of mounting a device on a directory.
[131033720110] |By abuse of language, you can just say "mount this device" or "unmount that directory".
[131033720120] |I think I've only covered your point 1., but this could get you started for more specific questions.
[131033720130] |Further reading: http://ultra.pr.erau.edu/~jaffem/tutorial/file_system_basics.htm
[131033730010] |In Unix everything is a file.
[131033730020] |These files are organized in a tree structure, beginning at the root /
.
[131033730030] |Your filesystem or filesystems will then be mounted at the appropriate places under / according to your /etc/fstab file.
[131033730040] |This file contains information about your filesystems, which device they belong to and to which point they will get mounted to - the mountpoint.
[131033730050] |That's the "mount concept".
[131033730060] |It is not limited to disks and other block devices; here are some examples involving mount:
[131033730070] |Mount a representation of your running kernel under /proc
[131033730080] |Mount a special log partition (other device, "logfriendly" filesystem) under /var/log
[131033730090] |Install different systems and mount just one home directory
[131033730100] |Mount remote directories for example via NFS to your system
[131033730110] |Mount an image of a CD to a specific directory (see the sketch below)
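As a sketch of that last item (file and mount-point names are placeholders):

    sudo mkdir -p /mnt/cdimage
    sudo mount -o loop image.iso /mnt/cdimage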
[131033740010] |Weird stuff going on in terminal and emacs since upgrade to Ubuntu 10.10
[131033740020] |I can't really describe the slightly odd behaviour of my terminal window and occasionally also on emacs, but here's a picture of it happening:
[131033740030] |If I press CTRL+L, it disappears.
[131033740040] |What could cause this?
[131033740050] |It started happening once I upgraded to Ubuntu 10.10, so I expect something broke there.
[131033740060] |It's a zsh shell using xterm.
[131033740070] |The computer in question is an Acer Aspire Timeline 4810T.
[131033750010] |This is a bug with xorg and a fix exists.
[131033760010] |Unix and Linux - Stack Exchange is for users of Linux, FreeBSD and other Un*x-like operating systems.
[131033760020] |If your question is about:
[131033760030] |Using or administering a *nix desktop or server
[131033760040] |The Unix foundation underlying MacOS (but generally not frontend application questions)
[131033760050] |The underlying *nix OS on an embedded system or handheld device (e.g. an Android phone)
[131033760060] |Shell scripting
[131033760070] |Applications packaged in *nix distributions (note: being cross-platform does not disqualify)
[131033760080] |UNIX C API and System Interfaces (within reason)
[131033760090] |then you're in the right place.
[131033760100] |Note that Ubuntu posts are a special case.
[131033760110] |If your question applies to Ubuntu only, or you're looking for answers that are Ubuntu-specific, you should post it on the Ask Ubuntu Stack Exchange site.
[131033760120] |If your question applies to other distros or you welcome more generic solutions, you're in the right place here.
[131033760130] |Cross-posting is strongly discouraged -- if you post on one site and then change your mind it can always be migrated to another.
[131033760140] |If you're not sure if your question is on-topic, ask on meta or just give it a try and the community will decide.