[131035460010] |What should I use to harden a Linux box? AppArmor, SELinux, grsecurity, SMACK, chroot?
[131035460020] |I am planning to go back to Linux as a Desktop machine.
[131035460030] |I would like to make it more secure.
[131035460040] |And try a few hardening techniques, especially since I plan to get my own server.
[131035460050] |What would be a good, sane hardening strategy?
[131035460060] |Which tools should I use - AppArmor, SELinux, SMACK, chroot?
[131035460070] |Should I use just one tool, e.g. AppArmor, or a combination of the above?
[131035460080] |Which advantages/disadvantages do these tools have?
[131035460090] |Are there any others?
[131035460100] |Which have a sane configuration-effort to security-improvement ratio?
[131035460110] |Which one would I rather use in a desktop environment?
[131035460120] |Which one in a server environment?
[131035460130] |So many questions.
[131035460140] |Thanks in advance for your help.
[131035470010] |AppArmor is usually thought to be simpler than SELinux.
[131035470020] |SELinux is quite complex and may be used even in military applications, while AppArmor tends to be simpler.
[131035470030] |SELinux operates at the inode level (i.e. restrictions are applied in the same way as ACLs or Unix permissions), while AppArmor applies at the path level (i.e. you specify access based on the path, so when the path changes the restriction may not apply).
[131035470040] |AppArmor can also protect subprocesses (like mod_php only), but I am somewhat skeptical about the real use of that.
[131035470050] |AppArmor seems to be finding its way into the mainline kernel (it is in -mm, IIRC).
[131035470060] |I don't know much about SMACK, but it looks like a simplified SELinux from the description.
[131035470070] |There is also RSBAC if you would like to look at it.
[131035470080] |chroot has a limited scope of use, and I don't think it would be of much use in a desktop environment (it can be used to separate daemons from access to the whole system - like a DNS daemon).
[131035470090] |It is certainly worth applying 'generic' hardening such as PaX, -fstack-protector, etc.
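For example, stack-smashing protection is just a compiler flag; a minimal sketch with GCC (myapp.c is an illustrative file name):

    gcc -fstack-protector -o myapp myapp.c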
[131035470100] |You can use chroot when your distro supports it, and the same goes for AppArmor/SELinux.
[131035470110] |I guess SELinux is better suited for high-security areas (it has much better control over the system) and AppArmor is better for simple hardening.
[131035470120] |In general I wouldn't bother hardening a generic desktop much (except switching off unused services, updating regularly, etc.) unless you work in a high-security area.
[131035470130] |If you want to harden anyway, I would use what your distro is supporting.
[131035470140] |Many of these tools need application support to be effective (e.g. tools compiled to support attributes, rules written for them), so again I would advise using what your distro supports.
[131035480010] |What are the new features available to bash 4?
[131035480020] |I was using bash 3 up to now, and just got a copy of the latest version.
[131035480030] |What are the major changes/feature additions in version 4 over version 3?
[131035490010] |The NEWS file lists the features that were added in each version.
[131035490020] |Most notably in 4.0 I see a couple of improvements to the autocompletion infrastructure, the (optional) addition of the ** glob operator, associative arrays, and various syntactic shortcuts.
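A quick sketch of two of those features (requires bash >= 4; the paths are illustrative):

    shopt -s globstar            # opt in to the new ** glob
    printf '%s\n' **/*.sh        # match .sh files at any depth

    declare -A capital           # associative array
    capital[France]=Paris
    echo "${capital[France]}"    # => Paris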
[131035500010] |How to make sure that I use the latest version of bash?
[131035500020] |I just set up bash v4.1.0 on my machine.
[131035500030] |The old version is under /bin/bash.
[131035500040] |Is there a way to use the newer version of bash without replacing the old one from /bin?
[131035500050] |I want the newer version to be used as the default one.
[131035510010] |You can set your user's default shell using the usermod command with the -s option (requires root privileges):
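A minimal example (assuming the new bash lives at /usr/local/bin/bash and your login is jdoe - adjust both):

    usermod -s /usr/local/bin/bash jdoe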
[131035520010] |First of all, make sure your favorite bash is in $PATH before the system one.
[131035520020] |Also set the environment variable SHELL to the full path to your favorite interactive shell (presumably your own installed version of bash).
[131035520030] |You can't set your own shell as login shell without root's intervention, but usually you don't need anything better than POSIX sh in ~/.profile anyway.
[131035520040] |You can put exec $SHELL at the end of ~/.profile so that you get your favorite interactive shell in console or ssh sessions, but make sure that you don't do it from shells that are not interactive (for example, some Linux distributions have X session scripts that start with #!/bin/bash --login).
[131035520050] |It is difficult to detect precisely when it is reasonable to call exec in .profile, but testing that the shell is interactive works in most cases; you can do something like this:
[131035520060] |If you're not root, that's about all you can do.
[131035520070] |In particular, scripts headed #!/bin/bash will keep on using the system bash, while scripts headed #!/usr/bin/env bash will use yours.
[131035520080] |If you are root, you can add your bash installation to /etc/shells, after which you can use chsh to change your login shell to it.
[131035520090] |You can also replace /bin/bash by your own version, but I wouldn't recommend this for such a critical system component.
[131035520100] |If you really want to replace /bin/bash, at least keep your package manager happy; on a dpkg-based system, you can use dpkg-divert.
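A sketch of the dpkg-divert approach (paths are illustrative):

    dpkg-divert --divert /bin/bash.distrib --rename /bin/bash   # move the packaged bash aside
    cp /usr/local/bin/bash /bin/bash                            # install your own in its place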
[131035530010] |I/O-overhead of dm-encrypted device?
[131035530020] |What would be the Read-/Write-overhead when using dm-crypt (LUKS) as full disk-encryption (including root partition) on a Linux-Desktop (Ubuntu)?
[131035530030] |I plan to stack it like so: LUKS > LVM > ext4. The CPU in use on the system would be a Core2 Duo 2.1 GHz with 4GB of RAM.
[131035530040] |Would encryption of such a system create a great/noticeable overhead?
[131035530050] |Are there any recent benchmarks to be found on the net?
[131035530060] |What is your personal experience?
[131035530070] |Are there any settings I can make to improve performance?
[131035530080] |Thanks for your help.
[131035540010] |There is no I/O-overhead involved in dm-crypt - just CPU overhead ... ;)
[131035540020] |On an Athlon 64 2.6 GHz dual-core system, for example, I can copy from one dm-crypt disk to another at ~40 MB/sec (2.6.26 kernel, Seagate 1.5 TB SATA disks).
[131035540030] |For performance, make sure that the AES module optimized for your architecture is loaded, e.g.
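For example (assuming a 64-bit x86 CPU; module names vary by kernel version):

    modprobe aes-x86_64              # load the optimized AES implementation
    grep -B1 -A2 aes /proc/crypto    # check which AES driver is registered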
[131035540040] |And for data-safety keep in mind that dm-crypt does not support barriers (perhaps they fixed it eventually?).
[131035540050] |Thus, a power-loss or something like that may screw your FS.
[131035540060] |A workaround is to disable the write-caches of the underlying disks via hdparm, which of course really hurts performance.
[131035540070] |Btw, one can argue that it does not really make sense to encrypt the root partition.
[131035540080] |However, encrypting swap does make sense.
[131035550010] |Ext4 may be a bad choice of filesystem if you are planning on performing LVM snapshots.
[131035550020] |I would advise doing substantial disk performance checking before going live, experimenting with block sizes on both FS and LVM.
[131035550030] |My experience was with Ext3, but the other articles I saw at the time implied that Ext4 had similar problems.
[131035550040] |I solved it by using XFS as the filesystem.
[131035560010] |Optimal Linux Distro to Install to SD card to use as USB-Bootloader for PC?
[131035560020] |(Copied from the Android Stack Exchange)
[131035560030] |Many people install Linux-based distros on USB drives to use for, largely, basic PC troubleshooting.
[131035560040] |Has anyone tried installing such a thing on their Android phone's SD card?
[131035560050] |That way instead of carrying about both a USB drive and a phone, one could simply plug their phone in, boot the PC from USB, and set about performing whatever actions are necessary.
[131035560060] |Does anyone have any recommendations for particular flavors of Linux which are best suited to this task?
[131035570010] |Well, I guess if your Android phone is able to act as a USB stick to the PC, then every distribution installable on a normal USB stick will do.
[131035570020] |For troubleshooting stuff GRML is really great.
[131035570030] |They also have instructions on how to install it on a USB stick (as a 'live-usb-stick').
[131035580010] |Somehow the rx/tx counters on the interface reset
[131035580020] |When doing ifconfig from hour to hour I notice that the counters for RX/TX bytes transferred reset:
[131035580030] |RX bytes:921640934 (921.6 MB) TX bytes:4001470884 (4.0 GB)
[131035580040] |How come?
[131035580050] |I would like to keep track of how much data I transfer from day to day, but they keep resetting.
[131035590010] |It seems the counters are 32-bit integers, so they "wrap around" at ~4GB.
[131035600010] |I believe that MrShunz's answer is correct.
[131035600020] |However, not all hope is lost.
[131035600030] |If you are interested in keeping statistics on how much you transfer each day, you might consider vnstat.
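For example (assuming the interface is eth0):

    vnstat -u -i eth0    # create/update the database for eth0
    vnstat -d -i eth0    # show daily traffic totals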
[131035610010] |How to route specific addresses through a tunnel?
[131035610020] |There are certain websites/services which I can only access from the subnet on which my server is located (think of the typical intranet scenario).
[131035610030] |Is there a way to transparently route traffic that goes to these addresses through an SSH tunnel?
[131035610040] |Consider the following setup:
[131035610050] |My laptop is connected on the home network.
[131035610060] |It cannot access services on ips X and Y directly.
[131035610070] |I have an SSH tunnel to a server which is on a subnet that can actually access these services.
[131035610080] |Can I somehow automatically encapsulate all the traffic to the subnets of X and Y to go through this tunnel, without having to run the entire VPN solution that would send all my traffic through the server?
[131035610090] |In other words: all traffic that goes to any other subnet should still go directly from the laptop, without passing through the server (using the tunnel).
[131035620010] |Disclaimer: I have not actually tested what I'm going to describe, and indeed it can be completely wrong, but your question is so intriguing that I cannot resist the temptation to draft an answer. :-) Also, the setup here depends on some iptables functionality that might exist only on Linux.
[131035620020] |Assuming you want to connect from your laptop to a specific port P1 on server X1, to port P2 on server X2, etc. -- I am going to describe how to route TCP traffic to these specific server+port pairs through an SSH tunnel.
[131035620030] |Note: the IP addresses X1, X2, etc. are the IP addresses of the servers as seen from the gateway host (the one you SSH into).
[131035620040] |Select some unused local ports L1 (e.g. 10000), L2 (e.g. 10001), etc.
[131035620050] |Ports L1, L2, ..., must be all distinct and their number should be equal to the number of distinct (Xn,Pn) server+port pairs.
[131035620060] |Use iptables to redirect packets directed to Xn:Pn onto localhost:Ln:
[131035620070] |iptables -t nat -A OUTPUT -p tcp -d X1 --dport P1 -j DNAT --to-destination localhost:L1
iptables -t nat -A OUTPUT -p tcp -d X2 --dport P2 -j DNAT --to-destination localhost:L2
[131035620080] |Now SSH to the gateway, using the -L option to tunnel traffic from localhost:Ln to (Xn, Pn):
[131035620090] |ssh gateway.server -L localhost:L1:X1:P1 -L localhost:L2:X2:P2 ...
[131035620100] |Example:
[131035620110] |Caveats:
[131035620120] |it only works for TCP, if it works at all...
[131035620130] |if you want to access more than one server, it's probably less work to setup a VPN
[131035620140] |it might still be easier to use SSH's -D option to simulate a SOCKS proxy and tunnel all your traffic through that.
[131035630010] |subnets of X and Y to go through this tunnel, without having to run the entire VPN solution that would send all my traffic through the server?
[131035630020] |What you want is the definition of a VPN.
[131035630030] |A VPN should not
[131035630040] |send all [your] traffic through the server?
[131035630050] |If it does, it is not set up properly.
[131035630060] |It is assumed that any machine you're trying to get access to via a tunnel or VPN is, by definition, not accessible via the Internet.
[131035630070] |So, only the needed, non-Internet-routable addresses should be routed down the VPN.
[131035630080] |If you have a more complicated situation, like needing only machines X and Y and nothing else, your IT staff can put those on a subnet for you.
[131035630100] |Then on your client computer, only route that subnet down the VPN.
[131035640010] |You can specify the interface through which to route traffic in the routing table:
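For example, with the net-tools route command (host.com and ppp0 as explained below):

    route add -host host.com dev ppp0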
[131035640020] |Where host.com is the hostname or IP that you want to access through the interface, and ppp0 is the link identifier for your VPN shown with the ifconfig command.
[131035650010] |Recent versions of OpenSSH support tun/tap network devices for true VPN support.
[131035650020] |See https://help.ubuntu.com/community/SSH_VPN for some basic documentation (obviously intended for Ubuntu, but the basic principle applies elsewhere.)
[131035660010] |Can I somehow automatically encapsulate all the traffic to the subnets of X and Y to go through this tunnel, without having to run the entire VPN solution that would send all my traffic through the server?
[131035660020] |This is a bit strange at first glance because that's what a VPN will do for you.
[131035660030] |SSH tends to be a point-to-point affair, the idea being that you connect one port on your local machine to the port of a remote machine elsewhere; it really wasn't designed for the type of traffic you envision.
[131035660040] |In other words: all traffic that goes to any other subnet should still go directly from the laptop, without passing through the server (using the tunnel).
[131035660050] |Again, a VPN would take care of that.
[131035660060] |If you are concerned about a "heavyweight" solution to getting secure VPN traffic (i.e. you don't want to monkey with it because it would be too complicated) you should really look at OpenVPN, which will do exactly what you are describing.
[131035660070] |Not only would it encapsulate all of the traffic, but it can be done in a way that only traffic destined for those subnets will make the trip over the VPN pipe.
[131035660080] |I will warn you that you will still need to edit a text file on the local and remote machine(s), but it's fairly easy to get running.
[131035660090] |For your purposes, because you do not want the party in the middle (a server) to see your traffic, you would set up the VPN to connect directly from your machine to the remote machine.
[131035660100] |Any routed packets would be encrypted before leaving your laptop, so you would have 100% end-to-end coverage.
[131035670010] |Grabbing the first [x] characters of a string from a pipe.
[131035670020] |If I have really long output from a command (single line) but I know I only want the first [x] (let's say 8) characters of the output, what's the easiest way to get that?
[131035670030] |There aren't any delimiters.
[131035680010] |One way is to use cut:
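For example (somecommand stands in for whatever produces the long output):

    somecommand | cut -c 1-8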
[131035680020] |This will give you the first 8 characters of each line of output.
[131035680030] |Since cut is part of POSIX, it is likely to be on most Unices.
[131035690010] |These are some other ways to get only the first 8 characters.
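For example (somecommand is a placeholder again):

    somecommand | head -c 8                               # first 8 bytes
    somecommand | awk '{ print substr($0, 1, 8); exit }'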
[131035690020] |And if you have bash:
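    string=$(somecommand)
    echo "${string:0:8}"    # bash substring expansion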
[131035700010] |How do I take a list and remove it from a file?
[131035700020] |I have a long list of domain names that I need to remove from /etc/remotedomains.
[131035700030] |They're probably not in any particular order in the file.
[131035700040] |Each domain is on one line.
[131035700050] |How could I iterate through the list, find each line in remotedomains, and remove it?
[131035710010] |The -v tells grep to only output lines that don't match the pattern.
[131035710020] |The -f list tells grep to read the patterns from the file list.
[131035710030] |The -F tells grep to interpret the patterns as plain strings, not regular expressions (so you won't run into trouble with regex meta-characters).
[131035710040] |The -x tells grep to match the whole line, e.g. if there's a pattern foo, it should only remove the line foo, not the line foobar or barfoo.
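Putting the flags together, a sketch (grep cannot edit in place, so write to a temporary file first):

    grep -vxFf list /etc/remotedomains > /etc/remotedomains.new &&
        mv /etc/remotedomains.new /etc/remotedomains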
[131035720010] |How could I simplify this command to only use awk?
[131035720020] |the format of /etc/userdomains is
[131035730010] |You can use gsub in awk to remove all the :s in the string.
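A sketch, assuming /etc/userdomains lines look roughly like domain.com: user (the exact format was elided above):

    awk '/user/ { gsub(/:/, ""); print $1 }' /etc/userdomains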
[131035740010] |awk has a sub(regexp, replacement, target) function that finds the first occurrence of regexp in target ($0 by default) and replaces it with replacement (in-place):
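For example (same assumed file format as above):

    awk '/user/ { sub(/:/, "", $1); print $1 }' /etc/userdomains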
[131035750010] |You could use tr instead of sed:
[131035750020] |awk '/user/ {print $1 }' /etc/userdomains | tr -d ":"
[131035750030] |Though I don't see how that's better than just using awk (nor do I see what's wrong with sed).
[131035760010] |The easiest way is to set the field separator to ":"
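For example:

    awk -F: '/user/ { print $1 }' /etc/userdomains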
[131035770010] |How can I display the time when a command was executed in my bash prompt?
[131035770020] |Currently my bash prompt looks like this:
[131035770030] |The problem is that \t will only display the time when the prompt was rendered (which would be very close to the time of completion of the last command).
[131035770040] |For me, it would be more useful to display the time at which the current command process was started.
[131035770050] |Is there a way to do this?
[131035780010] |No, because there could be any length of time between the prompt being displayed, you entering your command, and then pressing the ENTER key. You could set up a wrapper command (say doit) that displays the time and then executes the command it was passed. It would require you to remember to use it every time, and it would only work on systems (and accounts) that you'd configured it on.
[131035790010] |If I understand you correctly, you want to change the prompt when you start the current command.
[131035790020] |I don't think bash has a prompt-changing feature, but you can perhaps redraw over the prompt, if you can locate it (not so easy for multi-line commands).
[131035790030] |In zsh, you would use the precmd function.
[131035790040] |Bash doesn't have a similar feature, but it can be hacked up.
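One such hack (a sketch): abuse bash's DEBUG trap, which fires just before each command is executed, to print a timestamp:

    trap 'date +%H:%M:%S' DEBUG

Be warned that this fires for every simple command, including anything in PROMPT_COMMAND, so it is noisy in practice.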
[131035800010] |What does "rc" in .bashrc stand for?
[131035800020] |Is it "resource configuration", by any chance?
[131035810010] |As is often the case with obscure terms, the Jargon File has an answer:
[131035810020] |[Unix: from runcom files on the CTSS system 1962-63, via the startup script /etc/rc] Script file containing startup instructions for an application program (or an entire operating system), usually a text file containing commands of the sort that might have been invoked manually once the system was running but are to be executed automatically each time the system starts up.
[131035810030] |Thus, it would seem that the "rc" part stands for "runcom", which I believe can be expanded to "run commands".
[131035810040] |In fact, this is exactly what the file contains, commands that bash should run.
[131035820010] |Another expansion - run control
[131035820020] |On Tue, 4 Nov 2003, goldwyn rodrigues wrote:
[131035820030] |Does anyone know what RC (in bashrc/mailrc/... ) means or how it originated?
[131035820040] |I mean, is it an acronym?
[131035820050] |If yes, what does it stand for?
[131035820060] |'rc' stands for 'run control' and is a convention adopted from older Unix systems.
[131035820070] |For more info see this: http://www.catb.org/~esr/writings/taoup/html/ch10s03.html
[131035820080] |[Source]
[131035830010] |How do I make KDE open mail links to a webmail client in a browser?
[131035830020] |I use webmail, specifically Gmail.
[131035830030] |How do I make it so that when I click a mailto: link it opens Gmail in a browser (Chromium) with the email address filled in, in KDE?
[131035830040] |bonus points for listing other browsers and other popular webmails
[131035840010] |I found a Mozilla's support page, and a thread in Google's support forum.
[131035840020] |Adding a little guessing of my own, I have something that may work.
[131035840030] |Just follow these instructions for KDE (taken from Mozilla), with your "email client" being xdg-open "https://mail.google.com/mail?extsrc=mailto&url=$s"
[131035840040] |Open the KDE Control Center by clicking on K and selecting Control Center.
[131035840050] |In the Control Center window, click to expand KDE Components.
[131035840060] |Click to select Component Chooser.
[131035840070] |Click to select Email Client.
[131035840080] |Click to select the Use a different email client radio button.
[131035840090] |Type the full path to your e-mail client (e.g. /usr/bin/thunderbird).
[131035840100] |Click Apply to close the Control Center window and save your changes.
[131035840110] |I don't have KDE so I cannot test, but executing xdg-open "https://mail.google.com/mail?extsrc=mailto&url=$s"
in a terminal does open a browser with Gmail.
[131035840120] |Tell me if it works :)
[131035850010] |Follow the same basic instructions in phunehehe's post, except instead of xdg-open you can use kioclient exec https://mail.google.com/mail/?view=cm&fs=1&tf=1&to=%t&su=%s&%u.
[131035850020] |This will open the same browser that you've set as default in KDE.
[131035860010] |Can I work with SQL Server, Office and C# using Linux?
[131035860020] |Hi, I want to start working with Linux, and I know I should work with it regularly to improve myself.
[131035860030] |I work with SQL Server, Office, and C# at the company. Can I install and do my tasks in Linux (i.e. Red Hat)?
[131035870010] |Sadly, SQL Server is a Microsoft product, and I don't think they are stupid enough to support a platform that competes with Windows (i.e. their bread and butter).
[131035870020] |Things that use SQL Server are hardly configurable to use another database server, and I don't think you can change it in your company anyway.
[131035870030] |The same thing goes for MS Office (if you meant it).
[131035870040] |There are alternatives to MS Office, the most notable being OpenOffice.org, but no, there won't be MS Office on any Linux (unless you plan to run it on WINE, which is quite cumbersome to set up and maintain, and there is no guarantee that it will work).
[131035870050] |C# is a longer story.
[131035870060] |Still it's meant to be used on Windows (ask Microsoft for more information), but there is Mono, the opensource implementation of the .NET framework.
[131035870070] |There have been debates whether a Linux user should use it.
[131035870080] |Technically I can see a major obstacle when everybody else uses Visual Studio on Windows and you try to make it work on Linux.
[131035870090] |I hate to say this but frankly, I don't think you should try to use Linux at your workplace.
[131035870100] |If you want to learn Linux (which I encourage), installing a user-friendly distribution (Ubuntu maybe?) on your personal computer is your best shot.
[131035880010] |MS SQL Server is a Windows application, which is designed to run on Windows.
[131035880020] |Linux is not Windows.
[131035880030] |It is possible that there are some tricks to get it up and running on Linux, but I would not recommend it.
[131035880040] |Same applies to MS Office.
[131035880050] |There is an alternative called OpenOffice.org (or LibreOffice) which is able to read and write MS Office documents.
[131035880060] |If you need SharePoint integration you are lost.
[131035880070] |Mono is a C# and .Net development environment for Linux.
[131035890010] |@phunehehe and @ddeimeke have given good answers already.
[131035890020] |But I disagree with the statements on MS Office; yes, there are alternatives (and it appears Go-oo was left off), and Wine, but I never see anyone mention Crossover Office.
[131035890030] |Crossover is a fork of wine that's commercially backed.
[131035890040] |If you really want to run Microsoft Office for professional use I'd try that.
[131035890050] |This may also allow you to use windows tools for SQL Server.
[131035900010] |You have three options:
[131035900020] |1) Emulation (Wine, Crossover Linux, Bordeaux)
[131035900030] |2) Virtualization (VMware Player or VMware Workstation, Parallels Desktop, Oracle Virtualbox)
[131035900040] |3) Dual Boot
[131035900050] |For C# development on Linux, Mono Project is the way to go.
[131035900060] |You can develop in MonoDevelop IDE and connect to SQL Server hosted in a virtual machine using SQL Client (for more info see: Mono/ADO.NET, Mono/ODBC, Mono/Database Access)
[131035900070] |For more information about Mono have a look at the Start page: http://mono-project.com/Start and Mono FAQ Technical, Mono FAQ General, Mono ASP.NET FAQ, Mono WinForms FAQ, Mono Security FAQ
[131035900080] |Also see their Plans and Roadmap
[131035900090] |Thanks to the Mono project you can even build apps with C# for Apple devices using Monotouch or for Android using Monodroid.
[131035900100] |Also if you want to have the latest version of Mono and tools, I recommend using openSUSE, because that's the first place where you'll find the latest updates, Mono being a project backed by Novell, which is the company that also sponsors the openSUSE distribution.
[131035900110] |EDIT: (Completing the Office part of the question)
[131035900120] |// Office suites //
[131035900130] |1) IBM Lotus Symphony -> http://symphony.lotus.com/software/lotus/symphony/home.nsf/home
[131035900140] |2) Oracle OpenOffice -> http://www.oracle.com/us/products/applications/open-office/index.html
[131035900150] |3) OpenOffice.org -> http://www.openoffice.org/
[131035900160] |4) GNOME Office -> http://live.gnome.org/GnomeOffice
[131035900170] |5) Go-oo.org -> http://go-oo.org/
[131035900180] |6) SoftMaker Office -> http://www.softmaker.com/english/ofl_en.htm
[131035900190] |7) KOffice -> http://www.koffice.org/
[131035900200] |// Online Office suites //
[131035900210] |0) Microsoft Office Online -> http://www.officelive.com/en-us/
[131035900220] |1) Google Apps -> http://docs.google.com/
[131035900230] |2) Zoho -> http://www.zoho.com/
[131035900240] |3) ThinkFree -> http://thinkfree.com
[131035900250] |4) Live-Documents -> http://www.live-documents.com/
[131035900260] |5) Ajax13 -> http://us.ajax13.com/en/
[131035900270] |6) ContactOffice -> http://www.contactoffice.com/
[131035900280] |7) FengOffice -> http://www.fengoffice.com/web/
[131035900290] |8) Zimbra -> http://www.zimbra.com/
[131035910010] |Is it safe to disable OOM killer in a web server/reverse proxy?
[131035910020] |I have Linux machine dedicated to serving static contents and PHP pages with Apache.
[131035910030] |Apache also works as a reverse proxy in a subdomain.
[131035910040] |I moved the PostgreSQL database to another Linux machine.
[131035910050] |Is it safe to disable the OOM killer in the kernel?
[131035920010] |Probably not.
[131035920020] |If the OOM killer is running, then it is likely that the OOM killer needs to run to avoid the machine simply grinding to a halt, as nothing, not even the kernel, can allocate new memory when needed.
[131035920030] |The OOM killer exists because it is generally better to have some services fall over due to the killer than the whole machine to fall off the face of the 'net.
[131035920040] |If you see the OOM killer in action with any regularity then you should either reconfigure the services on the machine to use less RAM, or you may need to add more RAM to the machine.
[131035930010] |How can I create a file that just contains a binary number?
[131035930020] |I would like to create a file that just contains a binary number.
[131035930030] |I think that touch can be used to create an empty file, but is there any way I can fill it with a binary number, e.g. 10 (ten)?
[131035930040] |And how can I validate that the file contains the binary value of ten?
[131035930050] |See also How can I check the Base64 value for an integer?
[131035940010] |Convert the number to hex (in this case A) and then do:
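A sketch (foo is an illustrative file name; \x0a is the byte with decimal value ten):

    printf '\x0a' > foo    # write the single byte 0x0A
    od -t x1 foo           # verify: the dump should show "0a"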
[131035950010] |Hard disk labels
[131035950020] |How do I add a label to a partition on a disk, and how do I then mount it by label (manually and via fstab)?
[131035950030] |note: this is an external hard drive
[131035960010] |If your partition is ext2, ext3, or ext4, you can use the e2label command to set the label:
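For example (assuming the partition is /dev/sdb1 - adjust to your device):

    e2label /dev/sdb1 data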
[131035960020] |after you have set the label to, say, "data", you can add a line in /etc/fstab like this one:
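A plausible line (ext3 is assumed here - use your actual filesystem type):

    LABEL=data  /mnt/data  ext3  defaults  0  2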
[131035960030] |then you just need to say mount /mnt/data.
[131035960040] |If you don't want to modify fstab you can use mount's -L option to specify the label:
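    mount -L data /mnt/data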
[131035970010] |How can I record the sound output with gtk-recordmydesktop?
[131035970020] |I am using gtk-recordmydesktop to record the video output to my desktop.
[131035970030] |However, the videos have no sound.
[131035970040] |All the tutorials I found regarding this involved getting sound recorded from a microphone, while I am interested in getting the sound output recorded.
[131035970050] |How can I do this?
[131035970060] |The official FAQ says "The solution is in your mixer's settings.
[131035970070] |Keep playing with it ;)." which doesn't clarify anything.
[131035970080] |How can I get the sound output recorded, while being able to hear it myself also?
[131035980010] |If you use Pulseaudio, there is a howto in the Ubuntu Wiki.
[131035980020] |It basically boils down to using the PulseAudio mixer to re-route the sound from its source to Audacity, where it is saved, instead of to the default output (which would be your speakers).
[131035990010] |I managed to get it going with the steps on the Ubuntu Forums, for clarity here is what I did:
[131035990020] |sudo apt-get install gtk-recordmydesktop pavucontrol
[131035990030] |Opened the PulseAudio Volume Control dialog: Applications > Sound & Video > PulseAudio Volume Control
[131035990040] |Opened gtk-recordmydesktop
[131035990050] |In gtk-rmd start a recording
[131035990060] |In Volume Control, go to the Recording tab and change the recordmydesktop entry to 'Monitor of '
[131035990070] |This is what seems to have worked for me.
[131036000010] |How can I have activity in a folder logged?
[131036000020] |I have a folder shared with others through Dropbox.
[131036000030] |When a file is added to this folder, I get a system tray notification in KDE --- but of course if I'm not at my computer, I wouldn't see the notification.
[131036000040] |Is there a way to automatically log any changes within a folder (especially file creation), and/or automatically run a bash script to, say, send an email to myself as a more durable "alert"?
[131036000050] |A Google search turned up incron ...
[131036000060] |It sounds about right.
[131036000070] |Has anyone used this software?
[131036000080] |Thanks.
[131036010010] |It's a pretty straightforward use case for inotify; unfortunately that's a programming API, not a user utility.
[131036020010] |You probably want inoticoming, which is a user space command that uses the inotify framework.
[131036020020] |You can use it to watch a directory and execute the script of your choice, which can then do anything you want.
[131036020030] |I've used this extensively for monitoring directories for file activity and it works well.
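I can't vouch for inoticoming's exact option syntax here, so as a sketch of the same idea using inotifywait from the inotify-tools package instead (the path and mail address are placeholders):

    inotifywait -m -e create --format '%f' ~/Dropbox/shared |
    while read -r file; do
        echo "New file: $file" | mail -s "Dropbox activity" you@example.com
    done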
[131036030010] |How to change the name of a vim buffer
[131036030020] |Is it possible to change the name of a buffer in vim?
[131036030030] |Specifically, I'm using Conque Shell to open shells in vim (each shell is in a buffer) and with multiple shells, I see:
[131036030040] |in my buffer list.
[131036030050] |I would like to rename these buffers with more meaningful names (e.g., "mercurial" instead of "bash - 2").
[131036030060] |Is it possible?
[131036040010] |Not very surprisingly, :f changes the buffer name.
[131036040020] |Is that what you want?
[131036050010] |Stop cron script from destroying my mirrorlist with invalid data.
[131036050020] |I have the following cron script that runs daily.
[131036050030] |As you should be able to see from the code, it outputs the results from reflector to /etc/pacman.d/mirrorlist.
[131036050040] |Sometimes, reflector outputs an empty file and thus an invalid mirrorlist is created.
[131036050050] |How can I modify the script above to only write to /etc/pacman.d/mirrorlist IF there is valid output from reflector?
[131036060010] |It's a good idea to first accumulate the data, then move it into place.
[131036060020] |That way the target file will always be valid, even while the data accumulator program is running.
[131036060030] |If reflector does not properly report errors by returning a nonzero status, add your own validation test before the mv command, for example test -s "$target.tmp" to test that the file is not empty.
[131036060040] |If you want to keep a backup of the old version, add ln -f -- "$target" "$target.old" || true before the mv command.
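Putting it together, a sketch (substitute whatever reflector options your existing script already uses):

    #!/bin/sh
    target=/etc/pacman.d/mirrorlist
    if reflector > "$target.tmp" && test -s "$target.tmp"; then
        mv -- "$target.tmp" "$target"    # atomic replace: target is always valid
    else
        rm -f -- "$target.tmp"           # discard empty or failed output
    fi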
[131036070010] |VPN issues on web browsing
[131036070020] |I'm using Ubuntu Maverick and this is the setup:
[131036070030] |Our router connects us to the company's VPN through which we can access some internal websites.
[131036070040] |I have to connect also to a customer's VPN in order to use remote desktop and their websites daily.
[131036070050] |This customer has a web interface to connect to its VPN, it launches a Java App which signs us in and lets us use the services on their network.
[131036070060] |When I do this, I lose access to my company's VPN services (websites) in all browsers.
[131036070070] |This doesn't happen on the rest of the (Windows) boxes, and I'm the only one using GNU/Linux in the office.
[131036070080] |Right now, I log out of the customer's VPN to access the company's services, but I'm sure there's some solution so that I can use both VPNs together.
[131036070090] |Edit: The customer's VPN is used via a Juniper Networks login (a Java Client), and I get automatically connected into the company's VPN through the router.
[131036080010] |At least one of the VPN systems is probably set up for Full Tunnelling rather than Split Tunnelling.
[131036080020] |Full tunnelling means route 0.0.0.0/0 (everything) via the VPN tunnel, and split tunnelling means route only the organization's specific IP addresses.
[131036080030] |Check your route table with "ip route list".
[131036080040] |In order to circumvent the full vs split configuration set up by the VPN administrator, you probably need to wrap your own VPN client.
[131036080050] |Theoretically not very hard, since most of these SSL-VPN solutions just wrap a PPP interface with SSL and funnel it to port 443 on the gateway, but in practice it can get tricky.
[131036080060] |I've done it a couple of times, and two things that will really bug you are that the standard pppd daemon on Linux is a bitch (a user-mode executable running on top of TUN/TAP would have been a beautiful blessing; pppd is a mess), and that you usually need to touch some very specific URLs on the SSL-VPN gateway before it will let you connect a tunnel.
[131036080070] |Such as, authenticate with web forms and get various cookies, etc.
[131036090010] |How to install Adobe Acrobat Reader in Debian?
[131036090020] |I've got deb http://debian-multimedia.org squeeze main in "/etc/apt/sources.list", but wajig update && wajig install acroread results in:
[131036090030] |What’s happening?
[131036090040] |Are there alternative repos?
[131036100010] |The problem is that you have just added main to your sources.list.
[131036100020] |I believe that acroread is in the non-free part of the repository, so you will want to add:
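Presumably a line like this (inferred from the repository URL in the question):

    deb http://debian-multimedia.org squeeze main non-free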
[131036100030] |to /etc/apt/sources.list, and then update and install.
[131036100040] |Traditionally, even in third-party repos, main only includes files that conform to the Debian Free Software Guidelines.
[131036100050] |While non-free contains:
[131036100060] |Packages [that] have some onerous license condition restricting use or redistribution of the software.
[131036100070] |(from http://www.debian.org/distrib/packages)
[131036110010] |Difference between environment variables and exported environment variables in bash
[131036110020] |Bash seems to differentiate between variables which have been exported and those which have not.
[131036110030] |example:
[131036110040] |I know that set is a bash builtin and env is not.
[131036110050] |What are the differences between variables which are exported and those which are not?
[131036120010] |Exported variables are carried into the environment of processes started by the shell that exported them, while non-exported variables are local to the current process only.
[131036120020] |From the export man page:
[131036120030] |The shell shall give the export attribute to the variables corresponding to the specified names, which shall cause them to be in the environment of subsequently executed commands.
[131036120040] |set outputs the current environment, which includes any local non-exported variables. env is used to launch programs in a new environment, and with no arguments will output what that new environment would be.
[131036120050] |Since env is creating a new environment, only exported variables are brought through, as is the case for any program launched from that shell.
[131036120060] |For example, spawning a second shell within the first (I used $$ to represent prompts in the inner shell):
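The original transcript was lost; a minimal sketch of the same demonstration, using bash -c as the second shell:

    foo=local           # not exported
    export bar=global   # exported
    bash -c 'echo "foo=$foo bar=$bar"'
    # prints: foo= bar=global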
[131036120070] |Edited to show that it's the variable itself that's exported, not the value.
[131036120080] |Once you export foo, foo becomes a global variable and shows up in subsequent environments, even if changed later:
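A sketch of that behavior:

    foo=initial
    export foo
    foo=changed               # no need to re-export
    bash -c 'echo "$foo"'     # prints: changed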
[131036130010] |How to print all environment variables defined (but not necessarily exported) in bash
[131036130020] |The bash builtin command set, if invoked without arguments, will print all environment variables, but also all defined functions. This makes the output unusable for humans and difficult to grep.
[131036130030] |How can I make the bash builtin command set print only variables and not functions?
[131036130040] |Are there other commands which print only the environment variables, without the functions?
[131036130050] |Note: bash differentiates between environment variables which are exported and those which are not. see here http://unix.stackexchange.com/questions/3507/
[131036140010] |I am unsure how one can make set print only variables.
[131036140020] |However, by looking at the output of set, I was able to come up with the following that seems to grab just variables:
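Namely (reconstructed from the breakdown further down):

    set | grep "^\([[:alnum:]]\|[[:punct:]]\)\+="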
[131036140030] |Basically, I am looking for lines that start with letters, numbers, or punctuation followed by a "=".
[131036140040] |From the output that I saw, this grabs all of the variables; however, I doubt that this is very portable.
[131036140050] |If, as your title suggests, you want to subtract from this list the variables that are exported and thus get a list of non-exported variables, then you could do something like this:
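That is (again, reconstructed from the breakdown below):

    set | grep "^\([[:alnum:]]\|[[:punct:]]\)\+=" | sort >./setvars && env | sort | comm -23 ./setvars -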
[131036140060] |To break this down a bit, here is what it does:
[131036140070] |set | grep "^\([[:alnum:]]\|[[:punct:]]\)\+=" | sort >./setvars: This hopefully gets all of the variables (as discussed before), sorts them, and sticks the result in a file.
[131036140080] |&& env | sort: After the previous command is complete, we are going to call env and sort its output.
[131036140090] || comm -23 ./setvars -: Finally, we pipe the sorted output of env into comm and use the -23 option to print the lines that are unique to the first argument, in this case the lines unique to our output from set.
[131036140100] |When you are done, you might want to clean up the temp file that it created with the command rm ./setvars.
[131036150010] |The typeset builtin strangely has an option to show functions only (-f) but not to show parameters only. Fortunately, typeset (or set) displays all parameters before all functions, function and parameter names cannot contain newlines or equal signs, and newlines in parameter values are quoted (they appear as \n).
[131036150020] |So you can stop at the first line that doesn't contain an equal sign:
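A sketch consistent with the description that follows (the original snippet was lost):

    set | awk 'BEGIN { FS = "=" }
               !/=/  { exit }        # first line without "=": the functions begin
                     { print $1 }'   # parameter name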
[131036150030] |This only prints the parameter names; if you want the values, change print $1 to print $0.
[131036150040] |Note that this prints all parameter names (parameters and variables are used synonymously here), not just environment variables (“environment variable” is synonymous with “exported parameters”).
[131036150050] |Note also that this assumes there is no environment variable with a name that doesn't match bash's constraints on parameter names.
[131036150060] |Such variables cannot be created inside bash but can have been inherited from the environment when bash starts:
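For example (env can inject a name that bash itself would reject):

    env 'not a valid name=1' bash -c 'env | grep "not a valid name"'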
[131036160010] |Here is a solution, inspired by the previous answers:
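Presumably something like this, given the breakdown below:

    comm -3 <(declare | sort) <(declare -f | sort)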
[131036160020] |breakdown:
[131036160030] |declare prints every defined variable (exported or not) and function.
[131036160040] |declare -f prints only functions.
[131036160050] |comm -3 will remove all lines common to both.
[131036160060] |In effect this will remove the functions, leaving only the variables.
[131036160070] |To only print variables which are not exported:
[131036160080] |Another solution:
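Presumably declare -p, which matches the description that follows:

    declare -p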
[131036160090] |This will only print the variables, but with some ugly attributes.
[131036160100] |You can cut the attributes away using... cut:
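A sketch (the attributes occupy the first two space-separated fields of declare -p output):

    declare -p | cut -d' ' -f 3-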
[131036160110] |The output is usable.
[131036160120] |One downside is that the value of IFS is interpreted instead of displayed raw.
[131036160130] |compare:
[131036170010] |Does verifying packages enhance security and/or stability of a system?
[131036170020] |RedHat/CentOS and Ubuntu both provide packages to verify the contents of a package.
[131036170030] |CentOS uses rpm -V. Ubuntu uses debsums.
[131036170040] |I've spoken to admins who run these tools to verify the packages on their systems.
[131036170050] |Do these tools enhance the security of a system, or do they provide a false sense of security?
[131036170060] |Likewise, do these tools increase the stability of a system, since they can help detect problems after filesystem corruption?
[131036180010] |Well, they are just there to make sure that your package (downloaded or copied from somewhere) is not corrupted.
[131036180020] |After you first download or copy the package it is helpful to prevent you from installing a broken package.
[131036180030] |A broken package may work or may not work (poor stability).
[131036180040] |I don't think we should care about security when the system is not even stable.
[131036180050] |After the package has been installed, you often don't need to run the verification again, unless there has been a disk failure or a power outage (or unless you are really paranoid about it).
[131036180060] |In case of such incidents, the file system can be damaged, leading to broken packages.
[131036190010] |In the case of security, it will only catch a limited number of issues, when the person that broke into your server and replaced/changed some files is rather inexperienced, since any "decent" rootkit will make sure that a tool like debsums will "see" the original files and no "alarm" will ring.
[131036190020] |It might be slightly more useful to detect filesystem corruption.
[131036190030] |In any case, it certainly does no harm, as long as you are aware of the limitations.
[131036200010] |I think the short answer to Do these tools enhance the security of a system? is yes.
[131036200020] |Keep in mind this is only an enhancement, there are many aspects of security and it is a long hard road to understanding even an overview of security.
[131036200030] |Such tools can be used to verify the integrity of your operating system, and this is important.
[131036200040] |Sure there will be things that can compromise or fool these tools, but imagine if you didn't have them, how would you verify your binaries if you suspected they'd been compromised?
[131036200050] |Obviously, having corrupted binaries due to impending drive failure is a problem too.
[131036210010] |First keep in mind that distributions, like e.g. Debian, Ubuntu etc., use cryptographically signed packages, i.e. starting with a trusted install medium, the public key crypto assures that the packages from the official repositories are not modified on the way to your computer.
[131036210020] |For verifying the checksums of installed packages there are a few use cases:
[131036210030] |Data safety - to detect hardware failures for example, the ksplice blog features such a war story
[131036210040] |Security - for example, if your /usr is mounted from a file server and you suspect a manipulation on that server, but your machine is not compromised
[131036210050] |Of course, if an attacker is root on your system, then verifying checksums does not really provide an advantage, because he can manipulate the checking or hide the changes to files as well...
[131036220010] |Think of it as increasing the knowledge that a hacker needs to have to be able to hack your system successfully without being detected.
[131036220020] |In this case, it's trivial, and if someone knew enough to gain access to your systems, then they would be able to detect and subvert RPM checks.
[131036220030] |These kinds of system checks also have a deterrent effect that is cumulative.
[131036220040] |If it's easier to hack another system, for the same reward, then some hackers will go elsewhere.
[131036230010] |Tracing the workings of a function in Emacs
[131036230020] |I am learning emacs lisp.
[131036230030] |One of the things I was playing with was setting variables.
[131036230040] |So I go and set fill-column to 60 with (setq fill-column 60).
[131036230050] |After this, evaluating fill-column does give me 60, but the function fill-paragraph ignores this.
[131036230060] |How would I go about figuring out why this happens?
[131036230070] |Thanks.
[131036230080] |Best, Bart
[131036240010] |To instrument an Elisp function for debugging, load its source code, put point inside the function, and type M-x edebug-defun.
[131036240020] |Then, when the function is executed, the debugger will activate automatically.
[131036240030] |In your particular case, fill-column becomes buffer-local when set (see make-variable-buffer-local).
[131036240040] |You probably set it in the wrong buffer.
[131036250010] |How to grep standard error stream (stderr)?
[131036250020] |I am using ffmpeg to get the meta info of an audio clip.
[131036250030] |But I am unable to grep it.
[131036250040] |I checked; this ffmpeg output is directed to stderr.
[131036250050] |So I think that grep is unable to read the error stream to catch matching lines.
[131036250060] |How can we enable grep to read the error stream?
[131036250070] |Using a nixCraft link, I redirected the standard error stream to the standard output stream, then grep worked.
[131036250080] |But what if we do not want to redirect stderr to stdout?
[131036260010] |Hi Andrew,
[131036260020] |When you pipe the output of one command to another (using |), you are only redirecting standard output.
[131036260030] |So that should explain why your command doesn't output what you wanted (it does work, though).
[131036260050] |If you don't want to redirect error output to standard output, you can redirect error output to a file, then grep it later:
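A sketch (file names borrowed from elsewhere in this thread):

    ffmpeg -i 01-Daemon.mp3 2> /tmp/ffmpeg-info.txt
    grep -i Duration /tmp/ffmpeg-info.txt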
[131036270010] |This is similar to phunehehe's "temp file trick", but uses a named pipe instead, allowing you to get results slightly closer to when they are output, which can be handy for long-running commands:
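A sketch of the construction:

    mkfifo mypipe
    grep -i Duration mypipe &          # the reader blocks until the writer opens the pipe
    ffmpeg -i 01-Daemon.mp3 2> mypipe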
[131036270020] |In this construction, stderr will be directed to the pipe named "mypipe".
[131036270030] |Since grep has been called with a file argument, it won't look to STDIN for its input.
[131036270040] |Unfortunately, you will still have to clean up that named pipe once you are done.
[131036270050] |If you are using Bash 4, there is a shortcut syntax for command1 2>&1 | command2, which is command1 |& command2.
[131036270060] |However, I believe that this is purely a syntax shortcut, you are still redirecting STDERR to STDOUT.
[131036280010] |See below for the script used in these tests.
[131036280020] |Grep can only operate on stdin, so you must convert the stderr stream into a form that grep can parse.
[131036280030] |Normally, stdout and stderr are both printed to your screen:
[131036280040] |To hide stdout, but still print stderr, do this:
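    ./command > /dev/null    # stdout discarded; stderr still reaches the terminal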
[131036280050] |But grep won't operate on stderr!
[131036280060] |You would expect the following command to suppress lines which contain 'err', but it does not.
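Presumably something like:

    ./command | grep -v err    # filters stdout only; stderr bypasses the pipe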
[131036280070] |Here's the solution.
[131036280080] |The following Bash syntax will hide output to stdout, but will still show stderr.
[131036280090] |First we pipe stdout to /dev/null, then we convert stderr to stdout, because Unix pipes will only operate on stdout.
[131036280100] |You can still grep the text.
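A sketch:

    ./command 2>&1 > /dev/null | grep 'err'    # stderr is duplicated onto the pipe first, then stdout is discarded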
[131036280110] |(Note that the above command is different from ./command >/dev/null 2>&1, which is a very common command).
[131036280120] |Here's the script used for testing.
[131036280130] |This prints one line to stdout and one line to stderr:
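A minimal sketch of such a script:

    #!/bin/bash
    echo "This is stdout"
    echo "This is stderr" 1>&2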
[131036290010] |None of the usual shells (even zsh) permits pipes other than from stdout to stdin.
[131036290020] |But all Bourne-style shells support file descriptor reassignment (as in 1>&2).
[131036290030] |So you can temporarily divert stdout to fd 3 and stderr to stdout, and later put fd 3 back onto stdout.
[131036290040] |If stuff produces some output on stdout and some output on stderr, and you want to apply filter on the error output leaving the standard output untouched, you can use { stuff 2>&1 1>&3 | filter 1>&2; } 3>&1.
[131036300010] |If you're using bash, why not employ anonymous pipes, in essence shorthand for what phunehehe said:
[131036300020] |ffmpeg -i 01-Daemon.mp3 2> >(grep -i Duration)
[131036310010] |Gilles and Stefan Lasiewski's answers are both good, but this way is simpler:
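Something like (as the step-by-step below describes):

    ffmpeg -i 01-Daemon.mp3 2>&1 > /dev/null | grep -i Duration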
[131036310020] |I am assuming you don't want ffmpeg's stdout printed.
[131036310030] |How it works:
[131036310040] |pipes first
[131036310050] |ffmpeg and grep are started, with ffmpeg's stdout going to grep's stdin
[131036310060] |redirections next, left to right
[131036310070] |ffmpeg's stderr is set to whatever its stdout is
[131036310080] |ffmpeg's stdout is set to /dev/null
[131036320010] |Is there a simple flag to prevent installing X and anything that depends on it via ports?
[131036320020] |I'm running FreeBSD in a small VMWare image and I want to keep it headless.
[131036320030] |Is there a setting someplace that will guarantee I never pull in X as a dependency, or will I have to rely solely on eternal vigilance?
[131036320040] |Thanks.
[131036320050] |See also: the same basic question on SuperUser
[131036330010] |You need to include -DWITHOUT_X11 as a make argument.
[131036330020] |Depending on how you install ports you can 'include' it (sorry - I cannot find the details right now).
[131036330030] |Alternatively there is a ports-mgmt/portconf package in which you can specify WITHOUT_X11, if I understand correctly, in such a manner:
[131036330040] |Please note that it will work only with optional X11 dependencies - installing KDE will still install X11.
[131036340010] |According to the mailing list, you should set WITHOUT_X11=yes in /etc/make.conf.
[131036350010] |Migrating from tp_smapi to 'normal' ACPI support in 2.6.36
[131036350020] |I used zen kernel patchset for a long time which included tp_smapi patch.
[131036350030] |Recently in zen-stable tp_smapi was removed as "We no longer need tp_smapi as of 2.6.36 - the in-kernel thinkpad acpi support is better.".
[131036350040] |How do I port the following code to the in-kernel thinkpad acpi:
[131036360010] |It might not be possible.
[131036360020] |At least there is nothing in the documentation of thinkpad-acpi, nothing in the release notes, nothing on the thinkpad-acpi ThinkWiki page, and no mention of tp_smapi being obsolete on the tp_smapi ThinkWiki page.
[131036370010] |touch: cannot touch `foo': No such file or directory
[131036370020] |What could cause touch to fail with this error message?
[131036370030] |Note that an error due to incorrect permissions looks different:
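For example:

    touch: cannot touch `foo': Permission denied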
[131036380010] |Hm, homework question?
[131036380020] |Anyway, the following sequence causes this error message:
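For example (one way to reproduce it - the directory name is illustrative), in one terminal:

    mkdir /tmp/gone && cd /tmp/gone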
[131036380030] |In another terminal:
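    rmdir /tmp/gone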
[131036380040] |In the previous terminal:
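    touch foo
    touch: cannot touch `foo': No such file or directory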
[131036390010] |KDE toolbar background messed up
[131036390020] |One day (it has been like this for a while now) the background of the KDE main toolbar started looking really strange: screenshot (it looks to me as if the alpha channel of the background image is missing).
[131036390030] |Debugging I've done:
[131036390040] |When I change the workspace theme in KDE System Preferences to something other than "Air" and back again, the problem disappears.
[131036390050] |When I choose any other theme than the default "Air" there is no problem, except that I don't like these other themes as much.
[131036390060] |I created a new clean user account and logged in with KDE - the same problem appeared - so it's not that I have some outdated files in my .kde4 config directory.
[131036390070] |I have tried reinstalling several components of KDE that could affect this, but without success - I might not have picked the right ones; I haven't done a complete reinstall of KDE.
[131036390080] |It probably happened after some update of my Gentoo system, but I may not remember correctly, as I have looked for a solution to this for quite some time now.
[131036390090] |If only I knew where to look to find this particular image file and see if there is something wrong with it... but I've grepped and located and found my way through /usr/share quite a bit, and I still don't understand where those KDE workspace theme files are located.
[131036390100] |Using KDE 4.4.5 with the default "Air" theme.
[131036400010] |I've had this exact problem on all my newer KDE installations.
[131036400020] |It seems to be a bug in KDE though I've not seen anything specific on it.
[131036400030] |A quick fix that I've used is to change the height of the toolbar.
[131036400040] |It seems that if you change the height to something smaller, the problem disappears forever.
[131036400050] |It's just at the default height that it has problems.
[131036410010] |/etc/rc.d vs /etc/init.d
[131036410020] |Is Ubuntu's /etc/init.d directory exactly equivalent (functionally) to what I presume to be the more standard /etc/rc.d/ (at least on Arch)?
[131036410030] |Is there any particular reason Canonical used init.d instead of rc.d for startup scripts?
[131036420010] |Ubuntu uses /etc/init.d to store SysVinit scripts because Ubuntu is based on Debian and that's what Debian uses.
[131036420020] |Red Hat uses /etc/rc.d/init.d.
[131036420030] |I forget what Slackware uses.
[131036420040] |There just isn't a standard location.
[131036420050] |Ubuntu is in the process of switching from SysVinit to Upstart, which uses configuration files in /etc/init.
[131036430010] |/etc/init.d was the old historical location for SVR4. I forget why Red Hat added the /etc/rc.d/ level.
[131036430020] |I think to isolate things onto rc.d, but then needed to add a bunch of symlinks anyway for backwards compatibility.
[131036430030] |So there is /etc/init.d on Red Hat, it just symlinks elsewhere.
[131036430040] |So the standard location is /etc/init.d, though it may be a symlink not a real directory.
[131036430050] |There were some really old Linux distros that copied BSD with /etc/rc.local but pretty much no one uses that anymore.
[131036440010] |Slackware still uses /etc/rc.d
[131036440020] |FreeBSD uses /etc/rc.d and /usr/local/etc/rc.d
[131036450010] |Expand KDE activities concept to the shell
[131036450020] |Sometimes, I use KDE, and one of the things that I like the most in KDE 4 is the activity concept.
[131036450030] |At work, it is very useful because I often work on several different projects during one day.
[131036450040] |Switching to another activity enables me to change the widgets, so that I can have access to folders related to the current project, for instance.
[131036450050] |I've decided to use this concept in the shell, so I have coded a small bash function called "switch", which sets aliases useful for the current project, e.g. alias cdwww=~/public_html/current_project/www, and so on.
[131036450060] |My question is: Is there a way I can synchronise KDE activities with shell activities, that is, calling 'switch myproj' in every opened terminal when switching to activity 'myproj' through KDE, and vice versa (bonus question)?
[131036450070] |Another question: how do I make my newly created aliases work in all consoles?
[131036450080] |Is there a way I can detect every opened terminal in konsole or in gnome-terminal and execute my function in it?
[131036450090] |EDIT: here is the switch function, located at the end of my .bashrc file, feel free to comment:
[131036450100] |As per Gilles' answer, here is what I have got:
[131036460010] |Controlling KDE activities via dbus
[131036460020] |KDE can be controlled from the command line with qdbus.
[131036460030] |The general syntax is qdbus COMPONENT PATH METHOD ARGUMENT1... where COMPONENT is typically something like org.freedesktop.Foo or org.kde.Bar, PATH denotes a class exposed by the component, METHOD is the name of a particular action in that class, and there may be further arguments depending on the method.
[131036460040] |Here are commands for KDE ≥4.5 to list activities, to get the current activity, and to set the current activity.
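(The exact commands were lost in conversion, and the service and path names below are assumptions that vary between KDE versions; use the discovery techniques described next to find the right ones on your system.)

    # Hypothetical names: discover the real component/path with qdbus first.
    qdbus org.kde.plasma-desktop /Activities listActivities         # list activity IDs
    qdbus org.kde.plasma-desktop /Activities currentActivity        # current activity ID
    qdbus org.kde.plasma-desktop /Activities setCurrentActivity "$id"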
[131036460050] |Finding out what dbus can do
[131036460060] |KDE's dbus documentation is very poor.
[131036460070] |Each class is minimally documented (e.g. Activity, DesktopCorona).
[131036460080] |But you'll probably have to experiment and perhaps read the source (there are links in the API documentation pages) to find out what is available.
[131036460090] |If you type qdbus with up to two arguments, it will list the possibilities for the next argument.
[131036460100] |The following shell snippet lists all available Qt-dbus methods:
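(The snippet was lost; a sketch relying only on qdbus's listing behaviour described above:)

    # Walk every dbus service and object path, printing the methods qdbus reports.
    for service in $(qdbus); do
        for path in $(qdbus "$service"); do
            qdbus "$service" "$path"
        done
    done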
[131036460110] |Another way to explore the dbus tree is qdbusviewer in the Qt development tools.
[131036460120] |There is also a Python qt-dbus interface as part of PyQt.
[131036460130] |Getting the shell to react
[131036460140] |To make a shell react to external events, the best you can reasonably do is make it check something before displaying a prompt.
[131036460150] |Bash runs $PROMPT_COMMAND before displaying a prompt, and zsh executes the precmd function.
[131036460160] |So you can look up the current KDE activity and do something if it's changed from the last time you looked.
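(A minimal bash sketch of this idea; kde_current_activity stands in for whatever qdbus call returns the current activity on your system:)

    # Re-check the KDE activity before every prompt and react to changes.
    check_activity () {
        local act
        act=$(kde_current_activity)        # hypothetical helper around qdbus
        if [[ $act != "$LAST_ACTIVITY" ]]; then
            LAST_ACTIVITY=$act
            switch "$act"                  # the alias-switching function above
        fi
    }
    PROMPT_COMMAND=check_activity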
[131036470010] |Strip // Comments From Files
[131036470020] |What's the best way to strip all code comments in a given directory?
[131036470030] |I'd like to strip out all // ... EOL comments, and /* blah */ (or /** ... */) comments as well.
[131036470040] |This is a PHP project, and I'd like to go a little further than what is outlined below, yet for security purposes rather than efficiency.
[131036470050] |Zend Framework: Documentation Class Loading - Strip require_once calls with find and sed.
[131036470060] |Any help would be greatly appreciated.
[131036480010] |This will do it in Perl:
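(The original one-liner was lost; a sketch along these lines, using Perl in-place editing. Note it is naive: it will also mangle // or /* sequences that occur inside strings, e.g. in URLs.)

    # Strip single-line // comments and one-line /* ... */ comments from all PHP files.
    find . -name '*.php' -exec perl -i -pe 's{//.*$}{}; s{/\*.*?\*/}{}g' {} +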
[131036480020] |These commands will not delete comments that span multiple lines, like
[131036480030] |It is possible to do this, but multi-line regexes are way more difficult.
[131036480040] |There are also solutions for awk, sed, python, ...
[131036480050] |But this should also do it.
[131036490010] |A quick Google search returns a similar question on Stack Overflow.
[131036500010] |I'm wondering if you could use phplint to do this.
[131036510010] |Make terminal text color different when in ssh session
[131036510020] |Is there a way to make my terminal (konsole) display different text colors when I'm in an ssh session WITHOUT modifying the remote host's color configuration?
[131036510030] |Like, maybe automatically switching to a different profile?
[131036510040] |Konsole can use these different "profiles"
[131036510050] |I want to basically change to a different profile when in an ssh session.
[131036510060] |So that instead of the default being green on black, change to black on white or something.
[131036510070] |It doesn't necessarily have to use this profile setting.
[131036510080] |But if xterm or something has a setting to do this that would work too.
[131036510090] |The idea is to work on ANY ssh session, not just particular sessions with particular machines.
[131036520010] |One possibility, if the terminal supports it, is to use the terminal's Change Color escape sequence.
[131036520020] |Apparently konsole doesn't support it though.
[131036520030] |From the Xterm control sequence document (ctlseqs):
[131036520040] |OSC Ps ; Pt BEL
[131036520050] |Ps = 4 ; c ; spec -> Change Color Number c to the color specified by spec, i.e., a name or RGB specification as per XParseColor.
[131036520060] |Any number of c name pairs may be given.
[131036520070] |The color numbers correspond to the ANSI colors 0-7, their bright versions 8-15, and if supported, the remainder of the 88-color or 256-color table.
[131036520080] |What this means is that the control sequence \e]4;NUMBER;VALUE\a will change the appearance of color NUMBER.
[131036520090] |NUMBER is a color number (0–7 for the eight basic colors, 8–15 for the bright versions, and more if the terminal supports more colors).
[131036520100] |The VALUE is something that XParseColor understands, such as an RGB specification #123456 or an X color name (look for rgb.txt on your machine, or use xcolors to see the possibilities).
[131036520110] |For example, the following command changes the basic blue color (color 4) and its bright variant (4+8) to contain some green:
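(The command itself was lost; a sketch following the OSC 4 format documented above, with made-up greenish RGB values:)

    # Recolor ANSI blue (4) and its bright variant (12) toward blue-green.
    printf '\033]4;4;#2080a0\007\033]4;12;#40c0e0\007'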
[131036520120] |Note that this changes every character currently displayed in this particular color in the window.
[131036520130] |There is no way to change the meaning of a color only for subsequently displayed characters; if that's what you want, you'll have to configure each program displaying inside the terminal to use different color numbers when talking to the terminal.
[131036520140] |Having this happen exactly when you're typing in an ssh session will be very complicated, but handling the common cases is reasonably simple: use a wrapper around ssh that changes the color palette, then runs ssh, and finally changes the color palette back.
[131036520150] |Examples of cases this won't handle are suspending the ssh process and running ssh inside screen or tmux.
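(A sketch of such a wrapper, assuming an xterm-compatible terminal and that the value restored at the end matches your normal palette:)

    #!/bin/sh
    # ssh wrapper: tint the palette for the session, restore it afterwards.
    printf '\033]4;0;#200000\007'    # give color slot 0 a red tint
    ssh "$@"
    status=$?
    printf '\033]4;0;#000000\007'    # restore slot 0 (assumes black was the default)
    exit $status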
[131036530010] |What encryption does vim -x use?
[131036530020] |I read man vim, :h -x and :h encryption; none of these actually says what algorithm it's encrypted with.
[131036540010] |I know that they added support for blowfish encryption in 7.3.
[131036540020] |Other than that it's zip.
[131036540030] |If you're using 7.3 go:
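(The command was lost in conversion; presumably a query of the 'cryptmethod' option, something like:)

    :set cryptmethod?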
[131036540040] |to see what method you're using.
[131036550010] |:h 'cryptmethod' says that PkZip and Blowfish (new in Vim 7.3) are possible encryption methods.
[131036550020] |A look around FEAT_CRYPT in vim/src/misc2.c confirms it.
[131036550030] |The weak encryption method is documented in PKWARE's zip file format documentation, and the new strong encryption is documented on Bruce Schneier's Blowfish page.
[131036560010] |Creating a UNIX account which only executes one command
[131036560020] |Is there a way to create a user account in Solaris which allows the users to run one command only?
[131036560030] |No login shell or anything else.
[131036560040] |I could possibly do it with /usr/bin/false in the /etc/passwd file and just get the user to ssh but is there a nicer way to do it?
[131036560050] |Thanks.
[131036570010] |You could set the shell of that user to a script just running the command you want to allow: Whenever the user logs in, the command is run, then the user is logged out.
[131036570020] |And since there's no "full shell" you don't have to deal with the user trying funky stuff ;)
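(A minimal sketch: point the account's login shell at a script that runs exactly one command. The command shown is a hypothetical placeholder.)

    #!/bin/sh
    # /usr/local/bin/only-uptime: the only thing this account gets to do.
    # Set it as the login shell, e.g.: usermod -s /usr/local/bin/only-uptime restricted
    exec /usr/bin/uptime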
[131036580010] |You could use a forced command if the users can only connect through ssh.
[131036580020] |Essentially, whenever the user connects through ssh with a certain key and a certain username, you force them to execute a command (or script) that you specify in .ssh/authorized_keys.
[131036580030] |Commands issued by the users will be ignored.
[131036580040] |For example:
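(The example was lost; an authorized_keys line of this shape, with a hypothetical script path and a truncated key:)

    command="/usr/local/bin/report.sh",no-port-forwarding,no-pty ssh-rsa AAAA... user@host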
[131036590010] |What does 'mergemaster' do that 'make distribution' doesn't?
[131036590020] |After calling make installworld (or make world), there are two ways of updating source files in the new world: calling mergemaster -p or make distribution.
[131036590030] |I know that mergemaster calls make distribution, but what else does it do and why would I call it instead of just make distribution?
[131036600010] |make distribution just installs new configuration files, while mergemaster walks interactively over all config files and asks you which ones you want (and intelligently upgrades files you never edited in the first place if possible).
[131036600020] |It even gives you the option to merge them as needed.
[131036600030] |Basically, it automates the process of installing updated config files, doing all the diffs automatically and giving you a nicer way of merging the old and new config trees.
[131036600040] |If you're curious how it works, mergemaster is just a shell script.
[131036610010] |Difference between wget versions
[131036610020] |I have a problem with wget's behavior in 1.10 and later versions.
[131036610030] |I am using wget to download a report from my app.
[131036610040] |version 1.10.2:
[131036610050] |version 1.11.4:
[131036610060] |Do I need to add some parameter for version 1.11 and later?
[131036610070] |Update - Wireshark dump
[131036610080] |version 1.10.2:
[131036610090] |version 1.11.4:
[131036620010] |I just checked with wget 1.12 here on my machine and your parameters are fine.
[131036620020] |The output looks as if the server sends different data to both processes though:
[131036620030] |The first directly gives you "200" (as in everything is fine); the second sends a "302", meaning redirect.
[131036620040] |The page it redirects to (http://192.168.1.222:8080/myapp/spring_security_login) doesn't exist though.
[131036620050] |I'd look at the server app first, because basic HTTP auth hasn't, AFAIK, changed in wget in quite a while (all my scripts still run, and some of them are years old).
[131036630010] |Just add the --auth-no-challenge parameter.
[131036630020] |If this option is given, Wget will send Basic HTTP authentication information (plaintext username and password) for all requests, just like Wget 1.10.2 and prior did by default.
[131036630030] |For details, read the bug description.
[131036640010] |Resizing a Logical Volume holding a live virt guest
[131036640020] |I've got a Red Hat Enterprise Linux Server release 5.5 (Tikanga) box.
[131036640030] |I'm running four virtual guests on it.
[131036640040] |In /dev/VolGroup01 I have the following logical volumes: xenvm01, xenvm02, xenvm03.
[131036640050] |Then I have snapshots like so: vm01_backup, vm02_backup, vm03_backup.
[131036640060] |I'm wondering if there is any way for me to resize (grow) the LV /dev/VolGroup01/xenvm01 without first shutting down the guest.
[131036640070] |But it also seems as if I have to remove the snapshots before I can resize.
[131036650010] |Okay, growing the LV was rather painless.
[131036650020] |Say I wanted to add 20 GB to the xenvm01 LV: as root I just type 'lvextend -L +20G /dev/VolGroup01/xenvm01'. This will add the 20 GB to the guest, but as unallocated space: there is no filesystem on it and it's not partitioned. (Working on finding that solution now.)
[131036660010] |How to switch between users on one terminal?
[131036660020] |I'd like to log in as a different user without logging out of the current one (on the same terminal).
[131036660030] |How do I do that?
[131036670010] |Generally you use sudo to launch a new shell as the user you want; the -u flag lets you specify the username you want:
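(The example itself was lost; presumably something along these lines:)

    sudo -u otheruser -s    # new shell as otheruser; use -i for a login shell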
[131036670020] |There are more circuitous ways if you don't have sudo access, like ssh username@localhost, but I think sudo is probably simplest if it's installed and you have permission to use it.
[131036680010] |How about using the su command?
[131036680020] |If you want to log in as root, there's no need to specify username:
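(The examples were lost; presumably something like:)

    su - otheruser    # switch to otheruser (prompts for their password)
    su -              # no username given: become root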
[131036680030] |Generally, you can use sudo to launch a new shell as the user you want; the -u flag lets you specify the username you want:
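(Presumably, as in the answer above:)

    sudo -u otheruser -s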
[131036680040] |There are more circuitous ways if you don't have sudo access, like ssh username@localhost, but sudo is probably simplest, provided that it's installed and you have permission to use it.
[131036690010] |How to have cron run a python script as root?
[131036690020] |How can I get cron to run a python script as root?
[131036690030] |Here is my crontab file:
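(The crontab contents were lost; judging from the accepted answer below, the entry was presumably of this shape:)

    0 * * * * ./twitter/twitter.py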
[131036690040] |Am I doing something wrong?
[131036700010] |If that's root's crontab (edited with sudo crontab -u root -e or su -c 'crontab -u root -e' or similar), then ./twitter/twitter.py will run every hour.
[131036700020] |If this is the system crontab (/etc/crontab), a sixth field is needed after the asterisks: 0 * * * * root ….
[131036700030] |I recommend using the root user's crontab and leaving the system crontab to the system.
[131036700040] |./twitter/twitter.py starts from the current directory.
[131036700050] |Cron can't guess what you want the current directory to be: you never told it.
[131036700060] |Change this to use the absolute path to the script, e.g. /home/paul/scripts/twitter/twitter.py.
[131036700070] |You'll need to make sure that twitter.py starts with #!/usr/bin/env python (I'm assuming it is a Python script) and that python is in cron's default PATH (this will depend on your brand of unix; you can be sure that /usr/bin is in the default PATH, but if your python lives elsewhere such as /usr/local/bin, you may need to add a line like PATH=/usr/local/bin:/bin:/usr/bin at the top of the crontab).
[131036700080] |Also make sure that the script is executable (chmod +x …/twitter.py).
[131036710010] |Trying to set up a server/workstation SLED environment.
[131036710020] |I'm considering a number of desktop distros for use in our office.
[131036710030] |It seems like SLED matches what I want more than any other distro, especially home user focused distros.
[131036710040] |In an attempt to learn more about it, I made a SUSE Studio account and got to playing just to learn what I can.
[131036710050] |I have at least an above average amount of Linux experience.
[131036710060] |I run it on my home machines exclusively (running ArchLinux) and manage our Linux server at work just fine.
[131036710070] |My knowledge is a bit lacking in Linux server/workstation interactions, especially when it comes to SUSE (which I haven't used before).
[131036710080] |Now my goal here is threefold:
[131036710090] |All workstations should be running an identical OS with identical software.
[131036710100] |This part seems easy.
[131036710110] |If I make a SUSE Studio appliance for all our machines and use that, mission accomplished.
[131036710120] |I should be able to make system-level changes remotely to all our machines at once (as opposed to SSHing them one by one).
[131036710130] |I haven't played with it much but I've read that YaST is designed to do such things?
[131036710140] |I want user information to be stored on our (non SUSE) server.
[131036710150] |This is where things get especially fuzzy for me.
[131036710160] |If each workstation's software is identical then in theory if the user's home folder is stored on the server, a user could log in from any machine on the network and have all their stuff right there.
[131036710170] |This also greatly simplifies the backup process since all user data is in one spot (/home on the server).
[131036710180] |I don't know if this is a bad idea or not but it seems really cool.
[131036710190] |In SUSE Studio I only saw options to manually add users, so I don't know if SLED supports anything like this out of the box or not.
[131036710200] |I'd appreciate if somebody could clear up my confusions here and point me down the right path to have smooth server-workstation integration in our office.
[131036720010] |Installation of the same software is just half the way.
[131036720020] |You have to keep it the same over time (additional software, updates, service packs, ...).
[131036720030] |Therefore you could use some tools from SUSE.
[131036720040] |SMT (Subscription Management Tool) is the free option and ZENworks is the more costly one.
[131036720050] |If I'm right, both need a SUSE server to run on.
[131036720060] |You could use puppet or chef, as already mentioned in the comments.
[131036720070] |Another option is a cluster shell or something like it, but that is only a solution for small environments and very small changes to the systems.
[131036720080] |Another way is to use ZENworks from SUSE, if you want to spend the money.
[131036720090] |YaST is definitely no option for this.
[131036720100] |The comments already said nearly everything needed.
[131036720110] |One more point would be automount, to get the users' filesystems mounted.
[131036730010] |Display transfer speed when performing "cp" from the command line?
[131036730020] |Is this possible?
[131036740010] |The standard coreutils cp command doesn't support this.
[131036740020] |There's a Gentoo patch floating around that adds it for different versions, although it's not included in Gentoo anymore for some reason; the version for coreutils 6.10 is in their bugzilla, and I'm sure there are lots of others around.
[131036740030] |If you don't want to patch cp, you need to use some other command.
[131036740040] |For example, rsync has a --progress flag, so you can do:
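(The example was lost; presumably along these lines:)

    rsync -a --progress source destination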
[131036740050] |If instead of copying you cat the data and then redirect stdout to the destination (i.e. cat source >destination), then you can use a program that measures pipe throughput and insert it in the middle (cat source | SOME-PROGRAM >destination); there are a couple mentioned in this related question.
[131036740060] |The one I recommended there was pv (Pipe Viewer):
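(The example was lost; pv can also read the file itself, so presumably something like:)

    pv source > destination    # shows a progress bar, throughput and ETA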
[131036740070] |If you give it the --rate flag, it will show the transfer rate.
[131036750010] |I find that using pv in this manner works well for that purpose.
[131036760010] |Example for kernel timer Implementation in Linux
[131036760020] |Could you please give any basic example of a kernel timer (start_ktimer) implementation in Linux?
[131036770010] |What about this text on LWN?
[131036770020] |It describes the relevant struct, which should get you started.
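(Since the question asks for a basic example: a minimal module sketch using the classic timer_list API of that era (pre-4.15 kernels); untested, for illustration only.)

    #include <linux/module.h>
    #include <linux/timer.h>
    #include <linux/jiffies.h>

    static struct timer_list my_timer;

    /* Runs in softirq context when the timer expires. */
    static void my_timer_callback(unsigned long data)
    {
        pr_info("my_timer fired\n");
    }

    static int __init my_timer_init(void)
    {
        setup_timer(&my_timer, my_timer_callback, 0);
        /* Fire once, one second from now. */
        mod_timer(&my_timer, jiffies + msecs_to_jiffies(1000));
        return 0;
    }

    static void __exit my_timer_exit(void)
    {
        del_timer_sync(&my_timer);
    }

    module_init(my_timer_init);
    module_exit(my_timer_exit);
    MODULE_LICENSE("GPL");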
[131036780010] |Wiki Page on ArchWiki:
[131036780020] |The pacman package manager is one of the main features of Arch Linux.
[131036800010] |How to unpack libc6 source on Hardy using debian/rules?
[131036800020] |How do I unpack the libc6 source code on Hardy without building everything?
[131036800030] |I run
[131036800040] |and I get
[131036800050] |What I want is the unpacked and patched source code.
[131036800060] |Using google find this obsolete blog:
[131036800070] |Which says to run:
[131036800080] |But that gives me:
[131036800090] |Besides I want to unpack for amd64 (x86_64) not i686 anyway.
[131036800100] |So what is the super-secret target for unpacking libc6 via debian/rules?
[131036800110] |I do not want to start the build process.
[131036800120] |I do not have the space for that.
[131036800130] |Thanks
[131036810010] |Google was hopeless about this.
[131036810020] |Ubuntuforums didn't help.
[131036810030] |Stack Exchange didn't help.
[131036810040] |Picking apart makefiles with unusual names using emacs find-grep iteratively eventually uncovered:
[131036820010] |What do the numbers in a man page mean?
[131036820020] |So, for example, when I type man ls I see LS(1).
[131036820030] |But if I type man apachectl I see APACHECTL(8), and if I type man cd I end up with cd(n).
[131036820040] |I'm wondering what the significance of the numbers in the parentheses are, if they have any.
[131036830010] |The number corresponds to what section of the manual that page is from; 1 is user commands, while 8 is sysadmin stuff.
[131036830020] |The man page for man itself (man man) explains it and lists the standard ones:
[131036830030] |There are certain terms that have different pages in different sections (e.g. printf as a command appears in section 1, as a stdlib function appears in section 3); in cases like that you can pass the section number to man before the page name to choose which one you want, or use man -a to show every matching page in a row:
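(Presumably along these lines:)

    man 3 printf     # just the C library function
    man -a printf    # printf(1) first, then printf(3) after you quit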
[131036830040] |You can tell what sections a term falls in with man -k (equivalent to the apropos command).
[131036830050] |It will do substring matches too (e.g. it will show sprintf if you run man -k printf), so you need to use ^term to limit it:
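(Presumably:)

    man -k '^printf'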
[131036840010] |These are section numbers.
[131036840020] |Just type man man or open konqueror and type man://man and you'll see what these sections are.
[131036850010] |The history of these section numbers goes back to the original Unix Programmer's Manual by Thompson and Ritchie in 1971.
[131036850020] |The original sections were
[131036850030] |Commands
[131036850040] |System calls
[131036850050] |Subroutines
[131036850060] |Special files
[131036850070] |File formats
[131036850080] |User-maintained programs
[131036850090] |Miscellaneous
[131036860010] |Using xargs with input from a file
[131036860020] |Say I have a file with the following
[131036860030] |Now these directly correspond to (in this case) URL patterns such as http://example.com/persons/bob.tar, john.tar, sue.tar.
[131036860040] |I would like to take these lines and run them through xargs.
[131036860050] |I don't know what is passed to the command being executed though.
[131036860060] |How do I access the parameter either from the prompt (say I want to simply echo each line, like cat file | xargs echo $PARAM) or from a bash script?
[131036870010] |I think you're asking how to insert the individual lines pulled from xargs' stdin in the middle of a command, instead of just pasting them on the end.
[131036870020] |If so, the -I flag takes a replacement-string argument; xargs will then replace replacement-string in the command with the line read from stdin:
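(A sketch of the idea:)

    printf '%s\n' bob john sue | xargs -I NAME echo "Fetching NAME.tar"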
[131036880010] |Michael's answer is right, and should sort out your problem.
[131036880020] |Running
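(The command was lost; presumably something like:)

    cat file | xargs -I % wget http://example.com/persons/%.tar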
[131036880030] |will download the files bob.tar, john.tar, sue.tar as expected.
[131036880040] |BUT: Cat is Useless
[131036880050] |rather use:
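(Presumably:)

    xargs -I % wget http://example.com/persons/%.tar < file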
[131036890010] |You may want to set --delimiter= / -d to '\n' as well.
[131036890020] |On the other hand, if you are just trying to turn each line in the file into a URL,
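(The command was lost; sed can do this, using & to refer to the matched line:)

    sed 's!.*!http://example.com/persons/&.tar!' file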
[131036890030] |will do, and if you want to fetch all of them, just pipe that into wget -i -.
[131036900010] |another way with shell looping:
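(A sketch of such a loop:)

    while read name; do wget "http://example.com/persons/$name.tar"; done < file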
[131036900020] |you can also run each iteration in the background by appending & prior to the last semicolon; for very large downloads this might be handy
[131036910010] |With GNU Parallel you can do:
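(The command was lost; with GNU Parallel's {} placeholder, presumably something like:)

    cat file | parallel wget http://example.com/persons/{}.tar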
[131036910020] |Or:
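(Or reading the input file directly:)

    parallel -a file wget http://example.com/persons/{}.tar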
[131036910030] |Watch the intro video for GNU Parallel to learn more: http://www.youtube.com/watch?v=OpaiGYxkSuQ
[131036920010] |This is a more general version of Stefan's answer, but I'm using awk in the middle to prepare the exact "string" that I would like xargs to execute.
[131036920020] |And then xargs is using bash to do the actual "work".
[131036920030] |It is a little bit overkill for this example, but it is a general solution that with some modifications can solve many problems...
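(The code was lost; a sketch of the approach, assuming GNU xargs for -d:)

    # awk builds one complete command per line; xargs hands each line to bash.
    awk '{ printf "wget http://example.com/persons/%s.tar\n", $1 }' file \
      | xargs -d '\n' -I CMD bash -c 'CMD'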
[131036930010] |Ubuntu list explicitly installed packages
[131036930020] |In Gentoo there is the file /var/lib/portage/world that contains packages that I explicitly installed.
[131036930030] |By explicit I mean, packages that I choose, not including anything installed by default, or pulled in by the dependencies.
[131036930040] |Is there a similar file or a command to find that information in Ubuntu?
[131036940010] |Check out /var/log/apt/term.log
[131036950010] |You would want to have a look at this article.
[131036950020] |At the end of the article, there is a python script that should do what you want.
[131036950030] |It was written for (K)ubuntu, but should work for Ubuntu as well.
[131036960010] |NOTE: In the second section, I included a "wrong path" that I went down because I think it is illustrative.
[131036960020] |If you are just looking for the commands to run, look towards the bottom of that section.
[131036960030] |One way to think about this problem is to break this into three parts:
[131036960040] |How do I get a list of packages not installed as dependencies?
[131036960050] |How do I get a list of the packages installed by default?
[131036960060] |How can I get the difference between these two lists?
[131036960070] |How do I get a list of packages not installed as dependencies?
[131036960080] |The following command seems to work on my system:
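(The command was lost; a commonly cited aptitude invocation for this, searching for installed (~i) packages that were not automatically installed (!~M):)

    aptitude search '~i !~M' -F '%p' | sort | uniq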
[131036960090] |Similar approaches can be found in the links that Gilles posted as a comment to the question.
[131036960100] |Some sources claim that this will only work if you used aptitude to install the packages; however, I almost never use aptitude to install packages and found that this still worked.
[131036960110] |The | sort | uniq sorts the list and removes duplicates.
[131036960120] |This makes the final step much easier.
[131036960130] |How do I get a list of the packages installed by default?
[131036960140] |This is a bit trickier.
[131036960150] |I initially thought that a good approximation would be all of the packages that are dependencies of the meta-packages ubuntu-minimal, ubuntu-standard, ubuntu-desktop, and the various linux kernel related packages.
[131036960160] |A few results on google searches seemed to use this approach.
[131036960170] |To get a list of these dependencies, I did the following:
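(The commands were lost; a sketch of one way to pull the dependency names out of apt-cache, with an assumed set of metapackage names:)

    apt-cache depends ubuntu-minimal ubuntu-standard ubuntu-desktop \
      | awk '/Depends:/ { print $2 }' | sort | uniq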
[131036960180] |However, this seems to leave out some packages that I know had to come by default.
[131036960190] |I still believe that this method should work if one constructs the right list of metapackages.
[131036960200] |However, it seems that Ubuntu mirrors contain a "manifest" file that contains all of the packages in the default install.
[131036960210] |The manifest for my system is here:
[131036960220] |http://mirror.pnl.gov/releases/maverick/ubuntu-10.10-desktop-amd64.manifest
[131036960230] |If you search through this page(or the page of a mirror closer to you):
[131036960240] |http://mirror.pnl.gov/releases/maverick/
[131036960250] |You should be able to find the ".manifest" file that corresponds to the version and architecture you are using.
[131036960260] |To extract just the package names I did this:
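(The manifest lists one package per line followed by its version, so presumably:)

    awk '{ print $1 }' ubuntu-10.10-desktop-amd64.manifest | sort | uniq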
[131036960270] |The list was likely already sorted and unique, but I wanted to be sure it was properly sorted to make the next step easier.
[131036960280] |I then put the output in defaultinstalled.txt.
[131036960290] |How can I get the difference between these two lists?
[131036960300] |This is the easiest part since most Unix-like systems have many tools to do this.
[131036960310] |The comm tool is one of many ways to do this:
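(Presumably, with the two sorted lists from the previous steps:)

    comm -23 installed.txt defaultinstalled.txt    # lines only in the first file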
[131036960320] |This should print the list of lines that are unique to the first file.
[131036960330] |Thus, it should print a list of installed packages not in the default install.
[131036970010] |Here's a sample output of cat /var/log/apt/history.log:
[131036970020] |As for your question, filter the stuff with grep (cat /var/log/apt/history.log | grep Commandline).
[131036970030] |Note that these files are rotated, so check for others (ls /var/log/apt/history.log*) so you don't miss anything:
[131036970040] |update:
[131036970050] |I've checked both aptitude and synaptic (versions 0.70 and 0.6.3 respectively), and they both log their activities there.
[131036970060] |The one setback with them is that they don't have the line starting with Commandline, so the grep trick won't work with them.
[131036970070] |Time to file some bugs :)
[131036980010] |Using dpkg --get-selections will list the packages explicitly requested for installation.
[131036990010] |Applying a regex to stdin
[131036990020] |In programming we often see the use of regular expressions.
[131036990030] |One of the most common forms is:
[131036990040] |If stdin is text and stdout is newText, what would the bash equivalent be to the above code?
[131037000010] |man sed
[131037000020] |sed s/regex/replacementString/g
[131037010010] |For simple uses you can do it like this:
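(The snippet was lost; presumably something like bash's built-in substitution in a read loop:)

    while read -r line; do
        printf '%s\n' "${line//pattern/replacement}"
    done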
[131037010020] |as it is described here, but for more complex expressions sed is the way to go as alex described earlier.
[131037020010] |The most straightforward answer is sed's s command.
[131037020020] |You need to convert the regexp syntax to Basic regular expressions, and the substitution will be applied successively on each line.
[131037020030] |You can use \1 through \9 to refer to parenthesized groups in the original string.
[131037020040] |Add the g modifier to replace all occurrences; otherwise only the first occurrence is replaced.
[131037020050] |A more flexible utility is awk.
[131037020060] |It too processes its input line by line by default, but you can change the record separator with -vRS=… (standard awk wants a single character, or an empty value to mean two or more newlines; Gawk (the GNU version) accepts a regular expression).
[131037020070] |The sub function performs a single replacement, and gsub replaces all occurrences.
[131037020080] |The replacement string is interpreted literally except for \ and &; if you want to refer to parenthesized groups, you can use the match and substr functions.
[131037020090] |Bash has built-in support for regexp matching: [[ text =~ regexp ]].
[131037020100] |You can construct a replacement text using matched substrings stored in the BASH_REMATCH array.
[131037020110] |Use read or cat to obtain the input and printf to emit the output.
[131037020120] |The following pseudo-code performs multiple replacements (warning, untested; the code is supposed to perform multiple replacement from left to right as is usual, I hope I got it right).
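(The code itself was lost; a reconstruction from the explanation that follows; like the original, it is untested:)

    # Read all of stdin, then replace every match of $regexp with $replacement.
    replace_all () {
        local regexp=$1 replacement=$2 text prefix
        text=$(cat; echo X)    # end marker keeps trailing newlines alive
        text=${text%X}
        while [[ $text =~ $regexp ]]; do
            prefix=${text%%"${BASH_REMATCH[0]}"*}     # text before the first match
            printf '%s%s' "$prefix" "$replacement"
            text=${text#"$prefix${BASH_REMATCH[0]}"}  # continue with the suffix
        done
        printf '%s' "$text"
    }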
[131037020130] |(A few words of explanation: the end marker is to avoid trailing newlines being stripped by the command substitution. ${text%%"${BASH_REMATCH[0]}"*} extracts the part of the text that came before the match.
[131037020140] |Note that we can't use ^(.*) at the beginning of the regexp or we'd get the last match instead of the first.
[131037020150] |After the match, we iterate over the suffix.
[131037020160] |Finally we print the unmatched leftover, minus the end marker.)
[131037020170] |If you're satisfied with wildcard matching and limited replacement text abilities, bash also has ${variable/pattern/replacement}.
[131037020180] |Double the first slash to replace all occurrences.
[131037020190] |Patterns do have the power of regular expressions (but with an unusual syntax) if the extglob option is set.
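(A small illustration of the substitution forms:)

    text="foo bar foo"
    echo "${text/foo/baz}"     # baz bar foo  (first occurrence)
    echo "${text//foo/baz}"    # baz bar baz  (all occurrences)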
[131037030010] |You can use tools like sed and awk, but in my opinion they are very antiquated and only useful for narrowly defined tasks.
[131037030020] |A better bet is to redirect STDIN to a perl one-liner or script. Perl's regex support is so good that most other languages now support some compatibility with it. There are even a2p and s2p tools to transform awk and sed scripts directly into perl. Using perl allows you to use the entirety of CPAN to help you solve your problems.
[131037030030] |And if you don't like perl, you can use python in a similar capacity.
[131037040010] |Are RPMs valid across platforms?
[131037040020] |I'm a little confused about RPM's in Red Hat and/or Fedora (and/or other distros?).
[131037040030] |I can certainly accept that 64-bit RPM's are needed for 64-bit OS'es and 32-bit for 32-bit OS'es but...
[131037040040] |If I have an RPM for OpenOffice.org, is that RPM valid for any of my RPM-accepting OSes, or do I need to seek out an RPM specifically tailored to the OS that I'm working with?
[131037050010] |As usual: The answer depends.
[131037050020] |RPMs (or basically any given binary package container) contain runnable code.
[131037050030] |Most of the time that code depends on certain libraries or programs; the package specifies, for example, that it depends on library libA in a version >= 1.0.
[131037050040] |Now take two different distributions, both using the RPM packaging format. Let's say one calls the package libA-1.0, so the RPM you have specifies that it depends on libA.
[131037050050] |The second binary distribution has a different naming scheme and prefixes the package with a language, so it's named language-libA.
[131037050060] |Even if the contents of both these libA packages are identical, the package manager cannot know that.
[131037050070] |You could of course force RPM to just install the package anyways without looking at the dependencies but that's usually just asking for punishment.
[131037050080] |The problem is less bad if both distributions are related or even based upon one another: Ubuntu for example is based on Debian and therefore shares many of the naming conventions and packages, so you can often transfer a package built for Debian to an Ubuntu box.
[131037050090] |It also depends a lot on what language the package is written in: if you have something interpreted like Python, where the package is basically just a bunch of text files, taking a package from a different distribution is usually easy to handle; but if it's written in C++ and the two distributions use different versions of core libraries or compilers, you're basically out of luck.
[131037060010] |In addition to tante's answer:
[131037060020] |It basically depends on whether or not the binary contents of what is essentially a zip file with metadata are linked properly; if they aren't, they will have to be relinked or otherwise won't work.
[131037060030] |Some RPMs may only contain things like perl modules, which don't require linking, so as long as they are in the right spot they'll work.
[131037060040] |However, IIRC, there is more than one RPM format; I believe OpenSuse's RPM format differs slightly from Red Hat's, and thus wouldn't work on Fedora.
[131037060050] |Obviously though a Fedora package is unlikely to work on Red Hat because the library versions are so different, and the linking would be off.
[131037060060] |In short, no, it won't work, don't bother trying.
[131037060070] |Only generic RPMs like those provided for flash, oracle (you'll note oracle requires relinking), avg, etc. might work elsewhere.
[131037070010] |DSL connection not working in Ubuntu 10.04
[131037070020] |I use a wired PPPoE connection to connect to the Internet.
[131037070030] |What I need to do on Windows to connect to it is put in static IP address, gateway, subnet mask and DNS servers for my LAN card.
[131037070040] |Next I have to create a dialer for a PPPoE connection, put in my user name, the service name and the password, and "dial" this connection.
[131037070050] |And it works fine.
[131037070060] |On Ubuntu 10.04, however, I have tried setting things up in a similar fashion - put in all static addresses for the "automatic" wired connection, then put in user name, service name, password for a "DSL" connection.
[131037070070] |It worked for a while, then stopped.
[131037070080] |I have tried putting in all the details within the DSL configuration dialog, same thing happened - it worked for a while, then stopped.
[131037070090] |I have tried deleting the ethernet connection and only keeping the DSL one with all the numbers put in place, same thing happened - it worked for a while, then stopped.
[131037070100] |Each of the times, when it connected, it connected randomly, after trying a few times, and either stopped working within a few minutes, or after I had rebooted.
[131037070110] |I have deleted and remade the connection dozens of times - even with different names, but nothing seems to be working.
[131037070120] |I have also tried pppoeconf from the terminal; it didn't work.
[131037070130] |Based on Gilles' suggestion, I have checked /var/log/kern.log, but nothing changes in the file when I try to connect.
[131037070140] |I have also checked /sbin/route, but gedit can't even open it (it says it can't figure out the character encoding...).
[131037070150] |The "connection established" notification pops up from the top right corner, the same way as when the computer is actually connected to a network.
[131037070160] |Can anyone figure what's wrong and how it can be solved?
[131037080010] |I previously had problems connecting to certain routers via WPA2 with NetworkManager (which was the default in Ubuntu last time I used it).
[131037080020] |I solved it by installing wicd, which worked.
[131037080030] |Note: This doesn't apply if you aren't using a wireless router, or if you don't use WPA2.
[131037090010] |I seem to have found the solution.
[131037090020] |I deleted all the previous connections, deleted the configuration file from /etc/network created by pppoeconf, and rebooted.
[131037090030] |Then I set up the wired connection (Automatic ethernet) using the static addresses (update: dynamic works now as well) but made sure it didn't have "connect automatically" checked in the configuration dialog.
[131037090040] |Then I created a DSL connection, but of all the settings, I only filled in the user name, service name and password for it.
[131037090050] |I checked "available to all users" and closed the settings dialog.
[131037090060] |Then from the "connections" applet on the panel, clicked the connection name.
[131037090070] |And it connected and worked.
[131037090080] |Nevertheless, after a couple of reboots, I'm noticing that sometimes it won't connect on the first "click".
[131037090090] |It'll show notification "Connection established" but I won't have internet access.
[131037090100] |So I need to disconnect and retry a few times, and eventually it works.
[131037090110] |Update: I forgot to mention - I had to set the MTU to 1452 as well.
[131037100010] |root user denied access to .gvfs in rsnapshot?
[131037100020] |I was running rsnapshot as root and I got the following error.
[131037100030] |Why would this happen? What is .gvfs?
[131037110010] |.gvfs directories are mount points (sometimes).
[131037110020] |You may want to use the one_fs option in your rsnapshot configuration (so that it passes --one-file-system to rsync).
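(In rsnapshot.conf, fields are tab-separated, so presumably:)

    one_fs	1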
[131037110030] |Gvfs is a library-level filesystem implementation, implemented in libraries written by the Gnome project (in particular libgvfscommon).
[131037110040] |Applications linked with this library can use a filesystem API to access ftp, sftp, webdav, samba, etc.
[131037110050] |Gvfs is like FUSE in that it allows filesystems to be implemented in userland code.
[131037110060] |FUSE requires the one-time cooperation of the kernel (so it's only available on supported versions of supported OSes), but then can be used by any application since it plugs into the normal filesystem API.
[131037110070] |Gvfs can only be used through Gnome libraries, but doesn't need any special collaboration from the kernel so works on more operating systems.
[131037110080] |A quick experiment on Ubuntu 10.04 shows that while an application is accessing a Gvfs filesystem, ~/.gvfs is a mount point for a gvfs-fuse-daemon filesystem.
[131037110090] |This filesystem allows any application to access Gvfs filesystems, without needing to link to Gnome libraries.
[131037110100] |It is a FUSE filesystem whose implementation redirects the ordinary filesystem calls to Gvfs calls.
[131037110110] |The gvfs-fuse-daemon filesystem does not allow any access to the root user, only to the user running the application (it's up to each individual filesystem to manage the root user's permissions; a classic case where root doesn't have every power is NFS, where accesses from root are typically mapped to nobody).
[131037120010] |Make failing while trying to install SoX (spectrogram - libpng)
[131037120020] |I'm trying to install SoX, and get through configure fine, but it fails on make with the following:
[131037120030] |I'm guessing this is more of a linker/pointer error than anything else because this is the beginning of spectrogram.c
[131037120040] |png.h is included, which is where PNG_TRANSFORM_IDENTITY is supposed to come from.
[131037120050] |Am I missing something?
[131037130010] |If you're unable to get lib-png to work, and don't really want it anyway, you can specify it to try and install without png support (so no spectrograph) by doing:
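(The command was lost; with the usual autoconf convention this would presumably be:)

    ./configure --without-png && make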
[131037130020] |That solved the problem for me.
[131037130030] |Leaving this here in case someone needs it!
[131037140010] |upgrade xdebug without using apt?
[131037140020] |I'm running Debian 5.0.6 and have installed xdebug 2.0.3 via aptitude.
[131037140030] |Now I want to install PHPUnit, and when trying to do it through pear, I get the error that PHPUnit wants a more recent xdebug.
[131037140040] |Looking at the debian site, I don't see a package for 2.0.5.
[131037140050] |Is it safe to try to upgrade xdebug outside of the apt system?
[131037140060] |I'm not entirely clear on how these things (php, apache2, and xdebug) are interrelated, and I don't want to break things and have downtime.
[131037140070] |My rough guess would be that I would build xdebug 2.0.5 and then restart Apache.
[131037140080] |If I go it alone, is aptitude going to trip up on that later, when an xdebug package comes out?
[131037150010] |You are right, you can just install the php-dev package (to be able to compile against the existing PHP) and build an updated xdebug version, overwriting the debian version.
[131037150020] |The problem would obviously be that as soon as debian bumps their package (maybe just for a small patch) your changes would get overwritten.
[131037150030] |You could build your own package but that tends to be somewhat of a pain in the ass.
[131037150040] |But from a downtime/problem point of view you should be OK (the worst that could happen would be that you'd maybe have to adjust the path of the xdebug extension in the php.ini file).
[131037160010] |There is a recent enough version of xdebug in squeeze (the next release of Debian, which will be ready any month now).
[131037160020] |It doesn't have an official backport to stable (otherwise the backport would be listed on the xdebug package search page).
[131037160030] |The binary package depends on a recent version of PHP, but you should be able to compile the source package on lenny, since its build dependencies are satisfiable on lenny.
[131037160040] |Here's a recipe for building the package:
[131037160050] |Download the three files (.dsc, .orig.tar.gz, and .debian.tar.gz).
[131037160060] |Since this is a one-off need, just do it manually.
[131037160070] |Install the build dependencies (here debhelper and php5-dev) with apt-get or aptitude.
[131037160080] |Also install the basic set of development packages; the build-essential package will pull them all.
[131037160090] |Also install fakeroot.
[131037160100] |Unpack the source: dpkg-source -x xdebug_2.1.0-1.dsc, and change to the source directory: cd xdebug-2.1.0.
[131037160110] |(You can skip this step if you don't make any change in the source package.)
[131037160120] |Edit the debian/changelog file to add a new changelog entry.
[131037160130] |This is easily done in Emacs:
[131037160140] |make sure the dpkg-dev-el package is installed;
[131037160150] |open debian/changelog in Emacs;
[131037160160] |use C-c C-a to add an entry;
[131037160170] |choose a new version number (here 2.1.0~user394+1 would be a reasonable choice, following the pattern used by the official backports);
[131037160180] |write a log entry (e.g., backport to lenny; describe the changes you made);
[131037160190] |use C-c C-c to finalize the entry.
[131037160200] |Compile the package: dpkg-buildpackage -rfakeroot -us -uc. If you have a PGP/GPG key, don't pass -us -uc, and enter your passphrase if prompted to cryptographically sign the packages.
[131037160210] |Profit! Install the binary package.
[131037160220] |To summarize the steps:
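(The summary block was lost; condensing the steps above into shell commands:)

    apt-get install build-essential fakeroot debhelper php5-dev
    dpkg-source -x xdebug_2.1.0-1.dsc
    cd xdebug-2.1.0
    # optionally edit debian/changelog here
    dpkg-buildpackage -rfakeroot -us -uc
    dpkg -i ../*xdebug*.deb    # the resulting binary package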
[131037170010] |Why don't Page U/Down, Home/End work in less on Solaris over ssh from Ubuntu?
[131037170020] |I need to work on a Solaris server over ssh from my Ubuntu (Lucid) laptop.
[131037170030] |I got Home/End Insert/Delete Page Up/Down working in csh and bash using bindkey and ~/.inputrc, respectively.
[131037170040] |But I can't figure out how to get them working in less.
[131037170050] |How can I figure out what the problem is, and fix it?
[131037180010] |I found the answer here, in section 4.4, less(1).
[131037180020] |To use it with the movement keys, have this plain ASCII file .lesskey in your home directory:
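(The file contents were lost; a sketch with the usual vt100-style sequences and lesskey action names; verify the exact escapes your terminal sends, e.g. by pressing Ctrl-V and then the key:)

    #command
    \e[1~ goto-line
    \e[4~ goto-end
    \e[5~ back-screen
    \e[6~ forw-screen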
[131037180030] |Then run the command lesskey.
[131037180040] |(These are escape sequences for vt100-like terminals.)
[131037180050] |This creates a binary file .less containing the key bindings.