[131042330010] |ps regularly and analysing the processes' activities later.
[131042340020] |However, I can make a guess about the spikes around 6:00am on your graph: they could be the Ubuntu default daily cron jobs.
[131042340030] |On my machine's /etc/crontab, daily.d is configured to run at 6:25am.
[131042340040] |But you said we should ignore the time, so the obvious question is, do you have cron jobs?
[131042350010] |I'm not entirely sure it's going to be a perfect fit, but the process accounting package (mostly named psacct or acct) is able to keep record of who (which account) runs what (what program).
[131042350020] |This might help you figure out what is running at the time you specified.
[131042360010] |My default answer to such questions would be sar (System Activity Reporter) from the sysstat package.
[131042360020] |But as far as I know, sar doesn't collect an equivalent to the output of ps.
[131042360030] |So perhaps the combination of sar and elmarco's answer (regularly capture the output of ps) would help.
[131042360040] |EDIT:
[131042360050] |Steve D has mentioned pidstat in this question.
[131042360060] |This seems more suited for your needs.
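A minimal sketch of how pidstat could be used for this purpose (assuming the sysstat package is installed; the interval, count and log path are illustrative):

    # report per-process CPU usage every 60 seconds, 60 times (about one hour),
    # appending the timestamped output to a log for later analysis
    pidstat 60 60 >> /var/log/pidstat.log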
[131042370010] |tty* files are connected to virtual consoles (e.g. tty1 on linux), virtual terminals (e.g. pts/0) or physically connected hardware (e.g. ttyS0 is the physical serial terminal, if any, attached to the first serial port of the host).
[131042390060] |A console must be a piece of hardware physically connected to (or part of) the host.
[131042390070] |It has a special role in the system: it is the main point of access to a system for maintenance, and some special operations can be done only from a console (e.g. see single user mode).
[131042390080] |A terminal can be, and usually is, a remote piece of hardware.
[131042390090] |Last, but not least, a shell is a special program that interacts with a user through a controlling tty and offers the user a way of launching other programs (e.g. bash, csh, tcsh).
[131042390100] |A terminal emulator is a program that emulates a physical terminal (e.g. xterm, gnome-terminal, minicom).
[131042390110] |So when you look at a "text window" on your linux system (under X11) you are looking at: a terminal emulator, connected to a virtual terminal, identified by a tty file, inside which runs a shell.
[131042400010] |A terminal is at the end of an electric wire, a shell is the home of a turtle, tty is a strange abbreviation and a console is a kind of cabinet.
[131042400020] |Well, etymologically speaking, anyway.
[131042400030] |In unix terminology, the short answer is that
[131042400040] |\e[D).
[131042400310] |The shell converts control sequences into commands (e.g. \e[D → backward-char).
M-x shell in Emacs.
“foo”, “switch the foreground color to green”, “move the cursor to the next line”, etc.
[131042400360] |The terminal acts on these instructions.
umask and supported by the mount command.
[131042490050] |You can also differentiate between files and directories.
[131042490060] |Here are some lines of man mount:
[131042490070] |Use this in the option column in your /etc/fstab file, for example:
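The example line itself is not preserved here; a hedged sketch of such an /etc/fstab entry (device, mount point and umask value are made up):

    /dev/sdb1  /media/usbdisk  vfat  defaults,umask=022  0  0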
[131042500010] |/proc pseudo-filesystem comes from them; FUSE is a way for applications to easily follow that pattern.
[131042510040] |For example, here's a screenshot of a (very featureless) FUSE filesystem that gives access to SE site data:
[131042510050] |Naturally none of those files actually exist; when ls asked for the list of files in the directory, FUSE called a function in my program which made an API request to this site to load information about user 73 (me); cat trying to read from display_name and website_url called more functions that returned the cached data from memory, without anything actually existing on disk.
[131042520010] |FUSE isn't really a file system per se but code that allows file systems to be implemented as processes instead of kernel modules.
[131042520020] |One of the most useful benefits of FUSE is that it allows GPL code to "mix" with non-GPL code.
[131042520030] |For example, GNU/Linux and ZFS ( http://zfs-fuse.net/ ), or NTFS-3G on many OSes like OpenSolaris and *BSD ( http://www.tuxera.com/community/ntfs-3g-download/ ).
[131042520040] |The main drawback is the performance impact compared to native (kernel) drivers.
[131042530010] |Unix filesystems are traditionally implemented in the kernel.
[131042530020] |FUSE allows filesystems to be implemented by a user program.
[131042530030] |In-kernel filesystems are better suited for main filesystems for programs and data:
[131042530040] |cert7.db.
[131042550020] |According to mozilla it's not a fixed format.
[131042550030] |But there are lots of results if you google for cert7.db, edit cert7.db, create cert7.db or convert cert7.db that should point you in the right direction.
[131042560010] |--enable-gnome-check and --enable-gtk2-check, since I'm running gnome on my local machine, but that did not resolve the issue.
[131042560090] |EDIT: Running vim --version on both versions of vim shows many differences, the most notable being that the machine which has no issue is using the GTK GUI and the machine which does have an issue is using the X11-Motif GUI.
[131042560100] |I can't configure the problem box to use GTK though since I don't have everything I need installed.
[131042560110] |EDIT
[131042560120] |If you have set mouse=a in your vimrc file, then vim will interpret the selection as its visual mode.
[131042580030] |If this is the case, try holding Shift when selecting.
[131042580040] |Your terminal emulator may have its own mechanism for copying and pasting, for example gnome-terminal uses Ctrl Shift c for copying and Ctrl Shift v for pasting (as Morlock stated in his answer).
[131042580050] |You can use that instead.
[131042590010] |Turns out the problem was that vim on the remote was not compiled with GTK.
[131042590020] |This happened because the necessary package was not present on the remote box.
[131042590030] |Thus, even with the --enable-gtk2-check compile flag set, it was not actually including GTK.
[131042590040] |To fix it, log on to the remote machine and:
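The original commands are not preserved here; based on the surrounding description, the fix was presumably to install the GTK development packages and rebuild vim, roughly along these lines (package names are assumptions and vary by distribution):

    sudo apt-get install libgtk2.0-dev libncurses5-dev
    ./configure --enable-gui=gtk2 --with-features=huge
    make && sudo make install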
[131042590050] |bash, but
[131042630030] |Is the default shell related to the SHELL variable at all?
[131042630040] |If not, what is the variable used for?
[131042640010] |No, it is not related to the default shell.
[131042640020] |The system default shell is defined in the /etc/default/useradd file.
[131042640030] |Your default shell is defined in the /etc/passwd file.
[131042640040] |You can change it with the chsh command.
[131042640050] |The $SHELL variable usually stores the current shell executable path.
[131042640060] |Each shell behaves differently on this point.
[131042640070] |E.g. bash sets the SHELL variable if it is unset when it starts, otherwise it leaves it unchanged. tcsh does not support this variable at all.
[131042650010] |The variable is for your information ("Hey, what shell am I running under?") rather than the way you set the shell.
[131042650020] |Since Unix environment variables can only propagate down to child processes and not back up to parents, generally environment variables like this are descriptive rather than configuration options.
[131042650030] |To see your default shell, look at your entry in /etc/passwd, and to change that, run chsh.
[131042650040] |(This is assuming you're not using NIS or LDAP for this information; in that case this is unlikely to work in most real-world setups.)
[131042650050] |And as andcoz notes, the initial defaults for new users added with the standard useradd program are in /etc/default/useradd.
[131042660010] |That's not 100% true.
[131042660020] |For example:
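The terminal transcript that originally followed is missing; a hedged reconstruction of the kind of session being described (paths and prompts are illustrative):

    $ echo $SHELL
    /bin/bash
    $ ksh            # start a Korn shell from inside the bash session
    $ echo $SHELL
    /bin/bash        # still reports the parent login shell, not ksh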
[131042660030] |$SHELL contains the parent shell for your session, which is commonly your login shell as dictated by your user entry in /etc/passwd.
[131042660040] |More clearly, $SHELL is the parent shell from which your current session spawned.
[131042660050] |In my example the current shell, Korn, is technically running within BASH, which is why $SHELL was unmodified.
[131042660060] |Obviously this is an almost exclusively semantic distinction, however, do not fall into the trap of believing that what you see is always what you get.
[131042670010] |I always understood environment variables to be advisory to programs you are running. $SHELL would be the shell you want a program to start when it needs to run a shell.
[131042670020] |Compare with $EDITOR: your email program might use it to decide which editor to offer you.
[131042670030] |Because these environment variables are so easy to change, you can't really rely on them as the final word on what the world is really like.
[131042670040] |Related to your question on default shell: chsh is a command that allows you to change your login shell.
[131042670050] |Before changing it shows you what your current choice is.
[131042680010] |The “default shell” for a unix system administrator is what is stored in the “shell” column of the user database.
[131042680020] |This is the program that is invoked when you log in in text mode (on a text mode console, or over the network via e.g. ssh).
[131042680030] |The “default shell” for a unix application is either sh or $SHELL.
[131042680040] |There is some variation as to whether $SHELL is intended to be an interactive shell or a shell to run scripts.
[131042680050] |The POSIX specification is ambiguous in that regard, reflecting diverging practice:
[131042680060] |This variable shall represent a pathname of the user's preferred command language interpreter.
[131042680070] |If this interpreter does not conform to the Shell Command Language in the Shell and Utilities volume of IEEE Std 1003.1-2001, Chapter 2, Shell Command Language, utilities may behave differently from those described in IEEE Std 1003.1-2001.
[131042680080] |That bit about the preferred command language means that some applications may try to run commands in $SHELL, using POSIX shell command syntax.
[131042680090] |However, the normal way to run a shell command in a unix application is through functions such as system, which is supposed to find a suitable shell regardless of the value of the $SHELL environment variable.
[131042680100] |Bash and ksh are examples of POSIX-compliant shells.
[131042680110] |Zsh comes close.
[131042680120] |Csh is different, and you may occasionally run into trouble due to SHELL being set to csh (it's not very common as most applications do call sh and csh is compatible enough for basic use).
[131042680130] |In practice, for a unix user, $SHELL is your preferred interactive shell, i.e., what you want to see when you start a terminal emulator.
[131042680140] |It doesn't have to be the same as your login shell¹: you can set it in your .profile or .login.
[131042680150] |¹ For example, I typically leave whatever login shell is the default, maintain a Bourne-compatible .profile, but set SHELL to zsh if it's available.
[131042690010] |.tar, .tar.gz or .tgz.
[131042720030] |Is there any special reason for that or is that just convention?
[131042730010] |They may not need an extension, but it sure makes identifying them easier in the output of ls.
[131042740010] |File extensions are primarily a convention for the humans who use the system.
[131042740020] |There are tools which do use the filename extension to do things.
[131042740030] |For example Nautilus shows me a different icon based on the file extension.
[131042740040] |If I gave you a file called file, you might not know how to open this file.
[131042740050] |However, if I gave you a file named file.tar.gz or file.tar you could quickly and easily figure it out.
[131042750010] |Originally, on unix systems, the extensions on file names were a matter of convention.
[131042750020] |They allowed a human being to choose the right program to open a file.
[131042750030] |The modern convention is to use extensions in most cases; common exceptions are:
[131042750040] |README, TODO.
[131042750090] |Sometimes there is an additional part that indicates a subcategory, e.g. INSTALL.linux, INSTALL.solaris.
.bashrc, .profile, .emacs.
Makefile.
The file command looks at this information and shows you its guesses.
[131042750150] |Sometimes the file extension gives more information than the file format, sometimes it's the other way round.
[131042750160] |For example many file formats consist of a zip archive: Java libraries (.jar), OpenOffice documents (.odt, …), Microsoft Office documents (.docx, …), etc.
[131042750170] |Another example is source code files, where the extension indicates the programming language, which can be difficult for a computer to guess automatically from the file contents.
[131042750180] |Conversely, some extensions are wildly ambiguous, for example .o is used for compiled code files (object files), but inspection of the file contents usually easily reveals what machine type and operating system the object file is for.
[131042750190] |An advantage of the extension is that it's a lot faster to recognize it than to open the file and look for magic sequences.
[131042750200] |For example completion of file names in shells is almost always based on the name (mainly the extension), because reading every file in a large directory can take a long time whereas just reading the file names is fast enough for a Tab press.
[131042750210] |Sometimes changing a file's extension can allow you to say how a file is to be interpreted, when two file formats are almost, but not wholly identical.
[131042750220] |For example a web server might treat .shtml and .html differently, the former undergoing some server-side preprocessing, the latter being served as-is.
[131042750230] |In the case of gzip archives, gzip won't recompress files whose name ends in .gz, .tgz and a few other extensions.
[131042750240] |That way you can run gzip * to compress every file in a directory, and already compressed files are not modified.
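A quick illustration of that behaviour (the file names are invented, and the exact warning text may differ between gzip versions):

    $ ls
    archive.tar.gz  notes.txt
    $ gzip *
    gzip: archive.tar.gz already has .gz suffix -- unchanged
    $ ls
    archive.tar.gz  notes.txt.gz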
[131042760010] |find . -regex '.*\..\{5,\}'
[131042790070] |would work, but it doesn't give any hits, so I somehow don't get my regular expression right.
[131042800010] |The problem is that the default type of regexes used by find is emacs-style.
[131042800020] |Find's documentation for emacs-style regexes doesn't include the \{n,\} construction, leading me to believe that it isn't supported in find's implementation of emacs-style regular expressions.
[131042800030] |The emacs wiki lists this as valid, however it is possible that this wasn't always the case.
[131042800040] |I found that your regex produced output if you do this:
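The command itself is missing here; presumably it switched find to a regex dialect that supports interval braces, something like:

    find . -regextype posix-extended -regex '.*\..{5,}'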
[131042810010] |Braces are not always included in basic regexp implementations.
[131042810020] |I don't know if you have GNU find or Busybox find; Busybox may not have all the features of GNU find.
[131042810030] |A portable way to search for files with a long extension is find . -name '*.?????*'.
[131042810040] |Note that your regexp would match most files under a directory whose name contains a dot.
[131042820010] |~/.config/tomboy/addins).
[131042840090] |The add-in seems to support most popular blogging platforms.
[131042840100] |From the site:
[131042840110] |Tomboy Blogposter is a Tomboy plugin to post notes to a blog from for instance Wordpress, Blogger or LiveJournal, or (hopefully) any other AtomPub enabled website.
[131042850010] |I really like Wordpress' post editor.
[131042850020] |If you are not using Wordpress you can always create a new post in Wordpress, make all kinds of editing you need, then copy and use the HTML without publishing the post.
[131042850030] |For this you can go with a free wordpress.com account :D
[131042850040] |It also looks like the only thing you need is a WYSIWYG HTML editor; a little googling might be good.
[131042850050] |Try this online html editor, for example.
[131042860010] |BloGTK is a little old, but I like its simplicity.
[131042870010] |Jaws is a Framework and Content Management System for building dynamic web sites.
[131042870020] |It aims to be User Friendly, giving ease of use and lots of ways to customize web sites, but at the same time is Developer Friendly: it offers a simple and powerful framework to hack your own modules.
[131042870030] |Jaws Project Site
[131042880010] |find-file-other-window, which doesn't allow me to specify which window to open the buffer in.
[131042910040] |Perhaps I'm trying to use emacs in a way it wasn't intended to be used, but what I am trying to achieve is a workspace similar to an IDE:
[131042910050] |For example, I would like to have all internal help and python docs open in the right window, and all .py files in the main window by default.
[131042910060] |Any ideas?
[131042910070] |EDIT: I should mention that I'm using emacs in no-window console mode.
[131042920010] |After extensive googling and head-scratching, I have discovered that what I was looking for is a function called set-window-dedicated-p.
[131042920020] |Documentation can be found here, for anyone else who happens upon this page:
[131042920030] |http://www.gnu.org/s/emacs/manual/html_node/elisp/Dedicated-Windows.html
[131042920040] |Binding a key to this function will cause the selected window to refuse splitting or displaying other buffers.
[131042930010] |find -maxdepth 1 -type d | while read -r dir; do printf "%s:\t" "$dir"; find "$dir" | wc -l; done (from here), which has an output of basically
[131042930030] |and sort it by the numbers, largest to smallest, but I'm not sure how to make sort, or whatever, operate on a different column.
[131042940010] |One option is to flip the columns:
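The command that originally appeared here is missing; a hedged sketch that appends an awk stage to swap the two tab-separated columns of the command from the question:

    find -maxdepth 1 -type d | while read -r dir; do
        printf "%s:\t" "$dir"; find "$dir" | wc -l
    done | awk -F'\t' '{print $2 "\t" $1}'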
[131042940020] |Then you get output like this:
[131042940030] |You can pipe that through sort -nr to sort it the way you want.
[131042940040] |You can even pipe the sorted result through something like awk -F'\t' '{print $2 "\t" $1}' to flip the columns back if you need them in that order.
[131042950010] |Pipe the lines through sort -n -r -k2.
[131042950020] |Edited to sort from largest to smallest.
[131042960010] |touch command, but had realised the full extent of Terminal after installing a fun script on Mac and having to chmod 755 the file to make it executable afterwards.
[131042960030] |I'd like to know what /usr/local/bin is, though. /usr/, I assume, is the user of the computer.
[131042960040] |I'm not sure why /local/ is there though.
[131042960050] |It obviously stands for the local computer, but since it's on the computer (Or a server), would it really be necessary?
[131042960060] |Wouldn't /usr/bin be fine?
[131042960070] |And what is /bin?
[131042960080] |Why is this area usually used for installing scripts into Terminal?
[131042970010] |/usr/local/bin is for programs that a normal user may run.
[131042970020] |/usr.
[131043000060] |You can try it on your Mac, if you like: hold Cmd-S while it boots, and you will land in single-user mode.
[131043000070] |It's like running under the Terminal, but it takes over the whole screen because the GUI hasn't started yet, and you're running as root
.
[131043000080] |(Type "exit" at the single-user root prompt to leave single-user mode and continue booting into multi-user mode.)
[131043000090] |Unix systems are organized in this fashion because Unix dates from the days of 5 MB hard disks the size of washing machines.
[131043000100] |It was common for a big Unix system to have multiple physical hard disks, and for /usr to be off on a separate disk from the system's boot volume.
[131043000110] |If the /usr volume wouldn't mount for some reason, you could still get a Unix box to boot up into single-user mode to fix it.
[131043000120] |.../local/...obviously stands for the local computer...
[131043000130] |Yes.
[131043000140] |It refers to the fact that files under /usr/local are supposed to be particular to that single system.
[131043000150] |Files that are in any way generic are supposed to live elsewhere.
[131043000160] |This also has roots in the way Unix systems were commonly used decades ago when all this was standardized.
[131043000170] |In this case, it's again because hard disks were bulky, really expensive, and stored little by today's standards.
[131043000180] |To save money and space on disks, a computer lab full of Unix boxes would often share most of /usr over NFS or some other network file sharing protocol, so each box didn't have to have its own redundant copy.
[131043000190] |(This is also where we get /usr/share: it segregates files that could be shared even between Unix boxes with different processor types.
[131043000200] |Typically, text files: man pages, the dictionary, etc.)
[131043000210] |Files specific to a single box would go under /usr/local, which would be a separate volume from /usr.
[131043000220] |This historical heritage is why it's still the default for most third-party Unix software to install into /usr/local when installed by hand.
[131043000230] |Most such software will let you install the package somewhere else, but by making a non-choice, you get the safe default, which doesn't interfere with other common install locations with more specific purposes.
[131043000240] |There are good reasons to make software install somewhere else instead.
[131043000250] |Apple's OS X team does this when they build, say, grep from the GNU grep source code.
[131043000260] |They use /usr as the installation prefix, overriding the /usr/local default.
[131043000270] |Another common prefix is /usr/X11R6.
[131043000280] |And what is /bin?
[131043000290] |It's short for "binary", a generic term that can refer to many different things, depending on context.
[131043000300] |In the context of Unix directories, it refers to the fact that the files in that directory are compiled executable programs, as opposed to text files, which live elsewhere.
[131043000310] |On a modern system, it's common to find the occasional script file in a bin directory.
[131043000320] |That bends the original meaning behind the purpose of this directory, since scripts are text files, but it's not a problem in practice.
[131043000330] |The original Unix systems were carefully enough scoped that this didn't happen, at least not with the OS as originally delivered.
[131043000340] |Scripts that came with the OS lived elsewhere, like /etc.
[131043010010] |/usr/local/bin is the most popular default location for executables, especially open source ones.
[131043010020] |This is however arguably a poor choice as, on Unix systems, /usr was standardized in the early nineties to contain a hierarchy of files that belong to the operating system and thus can be shared by multiple systems using that OS.
[131043010030] |As these files are static, the /usr file system can be mounted read-only. /usr/local defeats this standard as it is by design local and thus not shared, needs to be read-write to allow local compilation, and isn't part of the operating system.
[131043010040] |Too bad something like /opt/local wasn't chosen instead ...
[131043020010] |.vimrc file for only a single ssh session?
[131043020030] |That is, when I log in I perform some operation so that vim uses say /tmp/myvimrc until I log out?
[131043020040] |I do not want to permanently overwrite the current .vimrc, I just need to use a different set of settings for the duration of my login every once in a while.
[131043030010] |Suppose you have this other set of settings in /tmp/myvimrc.
[131043030020] |If my reading of man vim is correct you can start vim with this set of settings using the following:
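The command line itself is not preserved; judging from man vim it would be the -u option, i.e. something like:

    vim -u /tmp/myvimrc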
[131043030030] |Thus, to make this an option for the rest of the session, I would create a function that sets this as an alias for vim.
[131043030040] |Thus, in bash I would put something like this in my .bashrc file:
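The original snippet is missing; a minimal sketch of such a function (the name usemyvimrc is made up):

    # make vim pick up the alternate settings for the rest of this session
    usemyvimrc() {
        alias vim='vim -u /tmp/myvimrc'
    }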
[131043030050] |Then, when I wanted my new vim settings, I would just run:
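Again, the original line is missing; with the hypothetical function above it would simply be:

    usemyvimrc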
[131043030060] |Note that I wouldn't store myvimrc in /tmp since this could easily be cleared out upon reboot.
[131043030070] |If you are using a shell other than bash this should still be possible, but the syntax could differ slightly.
[131043040010] |When you log in via ssh, ssh sets the variable $SSH_CONNECTION.
[131043040020] |Your .bashrc could check for this var and, if it is set, set the alias that you want:
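The snippet that followed is not preserved; a hedged sketch of such a check in .bashrc (the alias shown is illustrative):

    if [ -n "$SSH_CONNECTION" ]; then
        alias vim='vim -u /tmp/myvimrc'
    fi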
[131043050010] |You can use the VIMINIT environment variable to override the use of the usual .vimrc while keeping other parts of the initialization process.
[131043050020] |VIMINIT should be set to one or more ex-style commands (“colon” commands; use a pipe (|) to separate multiple commands), not just the path to a different initialization file.
[131043050030] |The main difference from using -u is that VIMINIT will still allow the other parts of the initialization process to be used (e.g. system vimrc, evim.vim (if applicable), et cetera).
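For example, a hedged sketch of setting it for the current session (the path is the one from the question):

    export VIMINIT='source /tmp/myvimrc'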
[131043060010] |tee exists for this purpose; it takes a filename argument and writes the data it reads from stdin to both stdout and the file:
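The example that originally followed is missing; a typical illustration would be something like:

    # show a long listing in less while also saving it to listing.txt
    ls -l | tee listing.txt | less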
[131043080010] |grub.cfg
[131043090040] |I'm unsure how far off the mark I am with this attempt, or even if it's possible:
[131043100010] |ls takes a lot longer).
[131043120030] |However, what's the limit for how many files you can stick in a folder before it chokes on you?
[131043120040] |Obviously, this may vary from file system to file system, but I would think that they'd at least be similar in their limits.
[131043120050] |Can you stuff files in there until you run out of disk space, or are going to run into some sort of limit like you can only put 65536 (or some other multiple of 2) files in a folder before your file system chokes on you?
[131043120060] |I'm most interested in jfs, since that's what I primarily use for my large partitions, but it would be good to know for *nix file systems in general.
[131043130010] |In the old UFS, directory size was limited only by your disk space as directories are just files which - like other files - have effectively unbounded length.
[131043130020] |I don't know, but expect that JFS is no different.
[131043130030] |As to how much is too much, it reminds me of the story of the manager who notices that when there are more than 8 users on the machine, performance drops dramatically so he asks the system administrator to find the 8 in the code and change it to 16.
[131043130040] |The point being that there is no 8, it is an emergent property of the system as a whole.
[131043130050] |How to know how big is too big?
[131043130060] |The only practical way is to add entries until it takes longer than you want.
[131043130070] |This is obviously a rather subjective approach but there isn't any other.
[131043130080] |If you are looking to store 65k+ files, there are probably better approaches depending on the nature of your data and how you wish to access it.
[131043140010] |
Ctrl + Z to suspend it.
[131043170040] |Then you can start it back in the foreground (using fg) or in the background (using bg).
[131043170050] |While the program is suspended or running in the background, you can start another program - you would then have two jobs running.
[131043170060] |You can also start a program running in the background by appending an "&" like this: program &.
[131043170070] |That program would become a background job.
[131043170080] |To list all the jobs you are running, you can use jobs.
[131043170090] |For more information on jobs, see http://www.faqs.org/docs/bashman/bashref_77.html .
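A short illustrative session (the command and job numbers are made up):

    $ sleep 100        # a long-running foreground program
    ^Z                 # Ctrl + Z suspends it
    [1]+  Stopped      sleep 100
    $ bg               # resume it in the background
    $ jobs             # list current jobs
    [1]+  Running      sleep 100 &
    $ fg               # bring it back to the foreground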
[131043180010] |UNIX has separate concepts "process", "process group", and "session".
[131043180020] |Each shell you get at login becomes the leader of its own new session and process group, and sets the controlling process group of the terminal to itself.
[131043180030] |The shell creates a process group within the current session for each "job" it launches, and places each process it starts into the appropriate process group.
[131043180040] |For example, ls | head is a pipeline of two processes, which the shell considers a single job, and will belong to a single, new process group.
[131043180050] |A process is a (collection of) thread of execution and other context, such as address space and file descriptor table.
[131043180060] |A process may start other processes; these new processes will belong to the same process group as the parent unless other action is taken.
[131043180070] |Each process may also have a "controlling terminal", which starts off the same as its parent.
[131043180080] |The shell has the concept of "foreground" jobs and "background" jobs.
[131043180090] |Foreground jobs are process groups with control of the terminal, and background jobs are process groups without control of the terminal.
[131043180100] |Each terminal has a foreground process group.
[131043180110] |When bringing a job to the foreground, the shell sets it as the terminal's foreground process group; when putting a job to the background, the shell sets the terminal's foreground process group to another process group or itself.
[131043180120] |Processes may read from and write to their controlling terminal if they are in the foreground process group.
[131043180130] |Otherwise they receive SIGTTIN and SIGTTOU signals on attempts to read from and write to the terminal respectively.
[131043180140] |By default these signals suspend the process, although most shells mask SIGTTOU so that a background job can write to the terminal uninterrupted.
[131043190010] |grep, I always want to skip devices and binary files, so I make an alias for grep.
[131043220040] |For adding new commands such as grepbin, I use a shell script in my ~/bin folder.
[131043220050] |If that folder is in your path, it will get autocompleted.
[131043230010] |*.doc and *.docx files on Linux.
[131043290030] |What should I use?
[131043290040] |.doc or .docx format (which the question kind of suggests).
[131043330310] |But if you're just looking for something for producing academic papers, and you don't need to distribute them specifically in those formats (or receive them in those formats).
[131043330320] |That's why I was asking those questions.
[131043330330] |AbiWord, however, will convert from .doc(x) to LaTeX format, and as mentioned, you can output HTML from TeX source to import into Word or whatever.
sh -x /etc/init.d/celeryd start.
[131043380020] |If it is in the shell script, you should see it this way.
[131043380030] |If daemon itself is waiting for a newline before returning, your answer will be celeryd-specific and might benefit from running it through strace, or simply with a </dev/null appended so that it doesn't have access to standard input through your terminal.
[131043380040] |Another "grope in the dark" thing to try: run ssh with the -nt option to disable terminal allocation and standard input.
[131043390010] |You might actually have a shell prompt and not realize it.
[131043390020] |If you have a program that writes to the terminal while in the background, your prompt gets covered up, but it's still ready for input.
[131043390030] |test.sh:
[131043390040] |interactive shell:
[131043390050] |In this example, the prompt is there.
[131043390060] |You see the "$" right before "This is a test"; that is the prompt.
[131043390070] |At the bottom you can see the cursor waiting for input.
[131043390080] |If you type a command here and press enter, it will work as usual.
[131043390090] |Try running ls
after starting your daemon but before pressing
.
[131043400010] |Does the daemon command need a & at the end?
[131043400020] |I don't think it does.
[131043400030] |This may be the source of your problem.
[131043410010] |sbopkg is not working with proxy
[131043410020] |Hello..
[131043410030] |Sbopkg used to work on my computer until the network admin made it compulsory for users to use a proxy before connecting to the Internet.
[131043410040] |So, when I tried to sync to slackbuilds, I'm getting these errors:
[131043420010] |The rsync docs suggest:
[131043420020] |You may establish the connection via a web proxy by setting the environment variable RSYNC_PROXY to a hostname:port pair pointing to your web proxy.
[131043420030] |Note that your web proxy's configuration must allow proxying to port 873.
[131043420040] |Have you tried that?
[131043430010] |Tunnel through a NAT
[131043430020] |I have a router that provides internet connection to a single client device via a wireless cellular data network.
[131043430030] |This network provides non-public IP addresses which get NATted.
[131043430040] |I would like to get a static IP address outside of the network that will route everything back to the device on the cell network.
[131043430050] |Because this is an embedded device, space is limited (about 500kB to work with here).
[131043430060] |Because the network is expensive, it has to not consume too much traffic.
[131043430070] |First I tried creating an IPIP tunnel using iproute2.
[131043430080] |From the server, I used the router's egress IP for the remote IP, not the private address the router received.
[131043430090] |I hoped that once the router communicated over the tunnel to the server, the server could communicate back.
[131043430100] |This was not the case.
[131043430110] |I tried dropbear SSH and found it won't do a generic tunnel, but I thought I could probably get around that using iptables.
[131043430120] |However, it seems that just having the ssh link open consumes about 150 bytes/sec.
[131043430130] |I also tried nc, but the communication is only one direction, so I can initiate a connection to the server, but can't get anything back.
[131043430140] |OpenSSH and OpenVPN are too big to fit on the device (both around 1MB).
[131043430150] |My next attempt will probably be to write a program that keeps a persistent socket open to the server, and to use iptables to route the traffic to that program.
[131043430160] |I wanted to see if there were any other ideas first.
[131043430170] |So, any ideas?
[131043440010] |The only NAT that an IPIP tunnel might work with is one-to-one NAT, which is clearly not what you have in the cellular case.
[131043440020] |This "150 bytes/second for an open SSH connection" business is very strange and you should investigate.
[131043440030] |No such thing happens for me with OpenSSH -> OpenSSH sessions (there's the unavoidable keepalives, but you actually WANT those when you're behind a NAT) and there's no reason it should unless you're actually passing traffic.
[131043440040] |You are mistaken about netcat being unidirectional, a TCP session initiated with netcat works both ways.
[131043440050] |I would suggest getting a bidirectional stream up any way you can (probably netcat and a TCP listener on the server) and running PPP over that.
[131043440060] |You get all the usual disadvantages of running IP over TCP, but it's better than not having connectivity at all.
[131043440070] |Here's what works for me in a quick test - on the server:
[131043440080] |server:~$ sudo pppd noauth passive pty "nc -lp 9999" debug nodetach
[131043440090] |On the client:
[131043440100] |client:~$ sudo pppd noauth pty "nc server 9999" debug nodetach
[131043440110] |I think having dialup semantics also provides a useful model for the cases where your cell device will simply not be reachable.
[131043440120] |After you have the IP connection running, you can consider playing IPIP or 1:1 NAT.
[131043450010] |Xorg partially working for a font error
[131043450020] |Hi, in my Debian stable, when I execute:
[131043450030] |startx
[131043450040] |as a normal user, the Xorg server begins to work, but after some seconds I receive this error in the console:
[131043450050] |FreeFontPath: FPE "/usr/share/fonts/X11/misc" refcount is 2, should be 1; fixing.
[131043450060] |(I cut the stacktrace :))
[131043450070] |When I execute Xorg as root, I haven't got this error!
[131043450080] |I tried to:
[131043450090] |change permission on the same file -> useless;
[131043450100] |reinstall Xorg --> useless
[131043450110] |config file not existing? -> what?
[131043450120] |I have seen the docs on the Net but I haven't found the root of the problem: for you, what is (or better, what can be) the problem?
[131043450130] |EDIT:
[131043450140] |my /var/log/log.0 ---> http://pastebin.ca/1998075
[131043460010] |No one ever found the cause of this error, nor any actual problems caused by it, so it was resolved in later Xorg releases by simply removing the printing of the message.
[131043470010] |What can I do when apt-get/aptitude doesn't create a menu item for an installed program?
[131043470020] |Obviously, this question is specific to Debian-based setups.
[131043470030] |I find that, often, packages I install using aptitude or apt-get only say that a menu item has been created.
[131043470040] |No menu item actually appears.
[131043470050] |Obviously, I can create one myself, but that requires knowing what the binary was actually called (in many cases different from the package name).
[131043470060] |There must be a simple way to know which directories things have been installed in.
[131043470070] |So,
[131043470080] |Does anyone know why the creation of menu entries fails?
[131043470090] |How can I get info about where the binaries reside/what they're called, in order to create my own menu entry?
[131043480010] |You can use:
[131043480020] |...to have a list of files that have been installed by the specific package.
[131043480030] |If you just want executable files:
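The commands themselves did not survive here; they were presumably along these lines:

    dpkg -L <package>               # list every file installed by the package
    dpkg -L <package> | grep bin/   # keep only files under bin directories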
[131043480040] |Here's a naively, hastily written script that does the above.
[131043480050] |Usage: exec-files-from-package [package].
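The script body is not preserved; a rough equivalent, assuming dpkg is available (the behaviour matches the usage line above):

    #!/bin/sh
    # exec-files-from-package: list the executable files installed by a Debian package
    pkg="$1"
    dpkg -L "$pkg" | while read -r f; do
        [ -f "$f" ] && [ -x "$f" ] && echo "$f"
    done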
[131043490010] |If you see a message telling you that a menu entry has been created, it means the package has dropped a file into /usr/share/menu describing one or more menu entries, as per the Debian menu policy.
[131043490020] |The documentation of the menu system (also available in /usr/share/doc/menu) explains the syntax of this file.
[131043490030] |Each window manager is supposed to include the system menu.
[131043490040] |Gnome doesn't do the standard thing, though (so what else is new).
[131043490050] |Gnome and KDE show a menu constructed from entries in /usr/share/applications/*.desktop and /usr/share/applnk/**/*.desktop, following the Freedesktop menu specification.
[131043490060] |Not all packages provide those.
[131043490070] |You can create a .desktop file based on the Debian entries and put it in ~/.config/menus/.
[131043500010] |Quick way to find the binaries: dpkg -L $pkg | grep bin/.
[131043510010] |Dual network connection
[131043510020] |I have a usb cellular modem and a Home LAN connection on my Ubuntu 10.10 box.
[131043510030] |Both work independently.
[131043510040] |I want to know how to have both connected at the same time, and be able to specify which application uses which device to connect to the internet.
[131043510050] |Does anyone know how to do this?
[131043520010] |There are several possibilities, depending on how you want to decide what packets go where.
[131043520020] |Most of them will require some understanding of how TCP/IP networking works in Linux.
[131043520030] |The main tools you'll have to know to do complex things are iptables (Ubuntu: iptables) and iproute2 (the ip command) (Ubuntu: iproute, iproute-doc).
[131043520040] |If you can discriminate fully by target IP address, it's simple: route the IP addresses according to your wishes.
[131043520050] |For example, the following commands will cause all packets for 1.2.3.x and 1.2.4.2 to go via ppp0, and other packets to go via eth0.
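The commands themselves are missing; a hedged sketch of what they might look like with iproute2 (the eth0 gateway address is made up):

    ip route add 1.2.3.0/24 dev ppp0
    ip route add 1.2.4.2/32 dev ppp0
    ip route add default via 192.168.1.1 dev eth0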
[131043520060] |For more complex requirements, you need to start using iptables and ip route.
[131043520070] |For example, the following commands set up special routing tables so that all packets marked 1 go out via eth0 and all packets marked 2 go out via ppp0 (except that packets intended for localhost stick to the loopback interface).
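Again the original commands did not survive; a hedged approximation using policy routing (the table numbers and gateway are illustrative):

    ip rule add fwmark 1 table 1
    ip rule add fwmark 2 table 2
    ip route add default via 192.168.1.1 dev eth0 table 1
    ip route add default dev ppp0 table 2
    ip route add 127.0.0.0/8 dev lo table 1
    ip route add 127.0.0.0/8 dev lo table 2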
[131043520080] |Now you can use iptables to “mangle” outgoing packets, adding a mark that will decide what route they take.
[131043520090] |For example, here's how to send all outgoing SMTP traffic (port 25) via eth0, and all traffic originated by an application running as the user proxy via ppp0.
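The iptables commands themselves are missing; presumably something like the following rules in the mangle table (the marks match the routing rules above):

    iptables -t mangle -A OUTPUT -p tcp --dport 25 -j MARK --set-mark 1
    iptables -t mangle -A OUTPUT -m owner --uid-owner proxy -j MARK --set-mark 2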
[131043520100] |See also 2 network interfaces connected to internet.
[131043520110] |Choose the one to use according to the domain name and bind software to different network interfaces.
[131043520120] |You'll need to arrange for these commands to run when both interfaces are connected.
[131043520130] |I recommend that you write a script called /etc/network/if-up.d/0justin-routes that runs the commands you want.
[131043520140] |This script will be executed whenever a network interface is brought up; as its name begins with a 0 it will run early in that process, before application-specific setup that might expect the routes to be in place.
[131043520150] |There is a symmetric /etc/network/if-down.d/ in case you also want to do things when one of the interfaces comes down.
[131043520160] |(All associated routes will automatically be erased, which may leave some packets stranded when you'd like them to fall back to the other interface.)
[131043520170] |The ifup scripts are documented in the interfaces(5) man page.
[131043520180] |The main thing to know is that the name of the interface being brought up or down is in the environment variable IFACE.
[131043520190] |You can find out whether the other interface is already up with if ifconfig | sed 's/ .*//' | grep -Fqx 'eth0'; then ….
[131043530010] |apt-get doesn't stop on Ctrl-c, what to do?
[131043530020] |Occasionally I have connection problem with apt-get (typically because I use it behind a proxy and try to install/upgrade flash).
[131043530030] |I have been trying to send Ctrl-C but it would not stop.
[131043530040] |I thought it was something wrong with Synaptic, but obviously not, since I just tried with a terminal and don't know how to stop it now.
[131043540010] |This is a known bug: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=599007 https://bugs.launchpad.net/ubuntu/+source/apt/+bug/578625
[131043540020] |The bug report mentions that even Ctrl + \ doesn't work.
[131043540030] |Solution:
[131043540040] |Suspend the job: Ctrl + Z
[131043540050] |Kill it: sudo killall apt-get
[131043550010] |Good free intro to Ubuntu?
[131043550020] |So a friend asked me if I knew of a good Ubuntu tutorial or book.
[131043550030] |I remembered one from a while ago: the Ubuntu Pocket Guide, but it turns out it's from 2008, so it's not really ideal any more.
[131043550040] |Are there any others out there that are updated for 10.4 or 10.10?
[131043560010] |What about the Ubuntu Manual?
[131043570010] |Official Ubuntu Documentation
[131043580010] |Additionally to the already named Ubuntu-documentation and manual use http://askubuntu.com/.
[131043590010] |Try Ubuntu 10.10 Essentials
[131043600010] |Anacron job complains "Gtk-WARNING **: cannot open display"
[131043600020] |I'm trying to make a simple weekly Anacron job that backs up my computer if I click yes on the dialog.
[131043600030] |The script I wrote works fine if run manually, but when Anacron runs it, nothing happens and I see Gtk-WARNING **: cannot open display in the logs.
[131043600040] |Apparently the script is run at a stage where graphical operations cannot be run.
[131043600050] |Is there any way to get this dialog to open from an Anacron job?
[131043600060] |Code:
[131043600070] |Error:
[131043610010] |It's probably just running without the $DISPLAY environment variable.
[131043610020] |If you echo $DISPLAY in your shell you can see what its value is (most likely :0.0), and then you can specify that in the crontab file:
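The crontab line itself is not preserved; it presumably set DISPLAY before invoking the script, e.g. (the schedule and script path are made up):

    0 9 * * 1  env DISPLAY=:0.0 /home/user/backup-dialog.sh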
[131043620010] |You should use user's crontab instead of system-wide.
[131043620020] |Try crontab -e (opens user's crontab in $EDITOR) or echo 'your crontab line here' | crontab -
[131043630010] |works for me with DISPLAY=:0.0 but I have just one user on the system so I specify that user in /etc/crontab
[131043640010] |Using software OpenGL rendering with X
[131043640020] |I want to try the most basic OpenGL driver, in order to find out what's the problem of my X server with OpenGL.
[131043640030] |I then want to have X use software rendering for OpenGL, like Windows does with opengl.dll with no driver installed.
[131043640040] |How can I do that?
[131043640050] |Didn't find anything when searching for X OpenGL software rendering.
[131043640060] |I'll be glad for a reference, and for the keywords I had to use in order to find out how to do that.
[131043640070] |I'm using Xorg in RHEL 5.3.
[131043650010] |I think you're looking for Mesa.
[131043650020] |I'm not sure if RHEL has RPMs for that.
[131043650030] |(Although Mesa is used in some hardware OpenGL drivers for X, it also provides a software-only renderer.)
[131043660010] |Duplicating my answer Force software based opengl rendering - Super User:
[131043660020] |will remove the libgl1-mesa-glx hardware-accelerated Mesa libraries and install the software-only renderer.
[131043660030] |Alternately, you can set LIBGL_ALWAYS_SOFTWARE=1, which will only affect programs started with that environment variable, not the entire system.
[131043660040] |Fedora doesn't package the swrast DRI backend separately from mesa-dri-drivers (and I assume the same is the case in RHEL), so the first isn't an option, but the latter is.
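For instance, to confirm which renderer is in use with that variable set (glxinfo comes from the glx-utils package on Fedora; the package name may differ on RHEL):

    LIBGL_ALWAYS_SOFTWARE=1 glxinfo | grep -i renderer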
[131043670010] |Another simpler solution is to add Option NoDRI to the Device section in xorg.conf.
[131043670020] |For example
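The snippet that originally followed is missing; a hedged sketch of such a Device section (the identifier and driver are illustrative):

    Section "Device"
        Identifier "Card0"
        Driver     "intel"
        Option     "NoDRI"
    EndSection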
[131043670030] |According to this email, it should always work.
[131043670040] |See this bug for more information.
[131043670050] |I didn't find anything about it in Xorg's documentation, so if you find anything about it - do edit it into my answer.
[131043680010] |Why is there a separate package repository for Debian security updates?
[131043680020] |Why don't they upload packages to the normal package repository?
[131043680030] |Is this a general convention (IE, do other distros separate the repositories also)?
[131043690010] |I'm pretty sure Debian puts security updates in the regular repo as well.
[131043690020] |The reason to have a separate repo that only contains security updates is so you can set up a server, only point it at the security repo, and automate updates.
[131043690030] |Now you've got a server that is guaranteed to have the latest security patches without accidentally introducing bugs caused by incompatible versions, etc.
[131043690040] |I'm not sure if this exact mechanism is used by other distros.
[131043690050] |There's a yum plugin to handle this kind of thing for CentOS, and Gentoo currently has a security mailing list (portage is currently being modified to support security-only updates).
[131043690060] |FreeBSD and NetBSD both provide ways to do security audits of installed ports/packages, which integrate well with the built-in update mechanisms.
[131043690070] |All told, Debian's approach (and probably Ubuntu's, since they're so closely related) is one of the slicker solutions to this problem.
[131043700010] |It helps with two things:
[131043700020] |safety - first get your security fixes, then you are at lower risk while updating the rest
[131043700030] |security updates should be stored at a high security level, as you tend to rely on them to protect the rest of your system, so it could be that this repo has stronger security controls to prevent compromise
[131043700040] |there could well be other reasons, but those are the two I would find useful
[131043710010] |Debian has a distribution channel that provides security updates only so that administrators can choose to run a stable system with only the absolute minimum of changes.
[131043710020] |Additionally, this distribution channel is kept somewhat separate from the normal channel: all security updates are fed directly from security.debian.org, whereas it is recommended to use mirrors for everything else.
[131043710030] |This has a number of advantages.
[131043710040] |(I don't remember which of these are official motivations I read on Debian mailing lists and which are my own mini-analysis.
[131043710050] |Some of these are touched on in the Debian security FAQ.)
[131043710060] |Security updates are spread immediately, without the delay incurred by mirror updates (which can add about 1 day of propagation time).
[131043710070] |Mirrors can go stale.
[131043710080] |Direct distribution avoids that problem.
[131043710090] |There is less infrastructure to maintain as a critical service.
[131043710100] |Even if most of Debian's servers are unavailable and people can't install new packages, as long as security.debian.org points to a working server, security updates can be distributed.
[131043710110] |Mirrors can be compromised (this has happened in the past).
[131043710120] |It's easier to watch a single distribution point.
[131043710130] |If an attacker managed to upload a malicious package somewhere, security.debian.org could push a package with a more recent version number.
[131043710140] |Depending on the nature of the exploit and the timeliness of the response, this could be enough to keep some machines uninfected or at least warn administrators.
[131043710150] |Fewer people have upload rights on security.debian.org.
[131043710160] |This limits the possibilities for an attacker trying to subvert an account or machine in order to inject a malicious package.
[131043710170] |Servers that don't need ordinary web access can be kept behind a firewall that only allows security.debian.org through.
[131043720010] |Can't access select https sites on Linux over PPPoE
[131043720020] |My internet connection used to be a direct LAN connection to my provider.
[131043720030] |Back then, everything would load fine on both Windows and Ubuntu (dual boot).
[131043720040] |However, a while ago they started needing me to dial (PPPoE) using a username and password.
[131043720050] |Gateway, subnet mask, IP, DNS servers all stayed the same.
[131043720060] |But since then, I haven't been able to browse certain websites on Ubuntu, even though there have been no such issues on Windows.
[131043720070] |Some example websites are - Ovi's sign in page (although share.ovi.com loads fine, and nokia.com loads fine), Live Mail (works on Chrome(ium) and Opera but not on Firefox (both 3.6 and 4)) Mozilla Addons website and other random websites.
[131043720080] |Some of the websites that don't load show timeout messages and for some websites (like the moz addons one), the browser will keep trying to load without an end (I've left it like that even for hours but not noticed anything different happen).
[131043720090] |I have tried changing the DNS servers to public ones.
[131043720100] |I have even tried booting from a Fedora LiveCD and then changing the DNS to those (and even to the ones of OpenDNS), but the exact same thing happens.
[131043720110] |What could be inherently wrong with some config within Linux itself that is causing this problem?
[131043720120] |Does anyone know why this is happening and how it can be fixed?
[131043720130] |Note: This question has been cross-posted on SU, but not gotten any responses.
[131043720140] |Update: Just saw here http://ubuntuforums.org/showthread.php?t=1571086&highlight=pppoe that someone else was having a similar problem and solved it by putting a NetworkManager.conf file in /etc/NetworkManager.
[131043720150] |What needs to be in that file?
[131043730010] |It appears that the core problem is something to do with SSL.
[131043730020] |All of your problem URLs are https://... ones.
[131043730030] |I don't see why a change to PPPoE affects this, but perhaps your ISP changed more than one thing at once, and you're blaming the wrong change.
[131043730040] |I would try adding a hardware router, one specifically recommended by model number by your ISP.
[131043730050] |Not only is that likely to negotiate the PPPoE connection exactly as your ISP wants, perhaps it will solve the issue with SSL connections, too.
[131043730060] |If it doesn't help your immediate problem, you do still get a few side benefits from it.
[131043730070] |First, a hardware firewall adds a layer of security.
[131043730080] |If you need to allow connections to the machine behind the firewall, see PortForward.com for port forwarding guides for every router you're likely to use.
[131043730090] |Second, most home routers let you share your Internet connection with multiple PCs.
[131043740010] |I had this exact same problem with chromium (and chrome).
[131043740020] |I assumed it was a webkit issue.
[131043740030] |I never found a permanent solution but if you google that error code (without the actual values) you'll see many people have the same issue.
[131043740040] |I could temporarily get it to work by closing the tab that was connected to the particular website and then cleared my cache and cookies and everything.
[131043740050] |I never found a solution and have since gone back to firefox.
[131043750010] |You have the symptoms of an MTU problem: some TCP connections freeze, more or less reproducibly for a given command or URL but with no easily discernible overall pattern.
[131043750020] |A telltale symptom is that interactive ssh sessions work well but file transfers almost always fail.
[131043750030] |Furthermore pppoe is the number one bringer of MTU problem for home users.
[131043750040] |So I prescribe an MTU check.
[131043750050] |What is it?
[131043750060] |The maximum transmission unit is the maximum size of a packet over a network link.
[131043750070] |The MTU varies from transport medium to transport medium, e.g. wired Ethernet and wifi (802.11) have different MTUs, and ATM links (which make up most of the long-distance infrastructure) each have their own MTU.
[131043750080] |PPPOE is an encapsulated protocol, which means that every packet consists of a few bytes of header followed by the underlying packet — so it lowers the maximum packet size by the size of the header.
[131043750090] |IP allows routers to fragment packets if they detect that they're too big for the next hop, but this doesn't always work.
[131043750100] |In theory the proper MTU should be discovered automatically, but this also doesn't always work either.
[131043750110] |In particular googling suggests that Network Manager doesn't always properly act on MTU information obtained from MTU discovery, but I don't know what versions are affected or what the problematic use cases are.
[131043750120] |How to measure it.
[131043750130] |Try sending ping packets of a given size to an outside host that responds to them, e.g. ping -c 1 -s 42 8.8.8.8 (on Linux; on other systems, look up the documentation of your ping command).
[131043750140] |Your packets should get through for small enough values of 42 (if 42 doesn't work, something is blocking pings.).
[131043750150] |For larger values, the packet won't get through.
[131043750160] |1464 is a typical maximum value if the limiting piece of infrastructure is your local Ethernet network.
[131043750170] |If you're lucky, when you send a too large packet, you'll see a message like Frag needed and DF set (mtu = 1492).
[131043750180] |If you're not lucky, just keep experimenting with the value until you find what the maximum is, then add 28 (-s specifies the payload size, and there are 28 bytes of headers in addition to that).
[131043750190] |See also How to Optimize your Internet Connection using MTU and RWIN on the Ubuntu forums.
[131043750200] |How to set it (replace 1454 by the MTU you have determined, and eth0 by the name of your network interface):
[131043750210] |As a once-off (Linux): run ifconfig eth0 mtu 1454
[131043750220] |Permanently (Debian and derivatives such as Ubuntu, if not using Network Manager): Edit /etc/network/interfaces.
[131043750230] |Just after the entry for your network interface (after the iface eth0 … directive), add a line with pre-up ifconfig $IFACE mtu 1454.
[131043750240] |Alternatively, if your IP address is static, you can add the mtu 1454 parameter to the iface eth0 inet static directive.
[131043750250] |Permanently (Debian and derivatives such as Ubuntu, with or without Network Manager): Create a script called /etc/network/if-pre-up.d/mtu with the following contents and make it world-executable (chmod a+rx):
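The script contents are not preserved here; a hedged reconstruction consistent with the once-off command above (adjust the interface name and MTU as needed):

    #!/bin/sh
    # lower the MTU whenever eth0 is brought up
    if [ "$IFACE" = eth0 ]; then
        ifconfig "$IFACE" mtu 1454
    fi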
[131043760010] |Strange change directory
[131043760020] |Is it possible to make bash change directory in command line simply by typing that directory without any commands like 'cd'?
[131043760030] |For example is it possible instead of writing this:
[131043760040] |I just want to write only this:
[131043760050] |In other words if I call directory as an application then I want to set that directory as a working one.
[131043770010] |I would imagine so.
[131043770020] |There is some catch-all mechanism that Ubuntu has.
[131043770030] |When you run a command that isn't on the system, it sometimes says:
[131043770040] |No command 'foo' found did you mean:...
[131043770050] |I'd bet you can hijack whatever that is and add a check to see if it's given a directory.
[131043770060] |It's probably worth posting another question asking what does that.
[131043770070] |EDIT: Found it: https://wiki.ubuntu.com/CommandNotFoundMagic
[131043780010] |In bash there is also the autocd option.
[131043780020] |You can enable it by using shopt -s autocd:
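The demonstration that originally followed is missing; enabling the option and then typing a directory name looks roughly like this (bash echoes the cd command it runs for you):

    $ shopt -s autocd
    $ /usr/local/bin
    cd -- /usr/local/bin
    $ pwd
    /usr/local/bin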
[131043790010] |bash get file name and extension
[131043790020] |Possible Duplicate: bash : get file extension
[131043790030] |I have file names like below
[131043790040] | filename.jar
[131043790050] |filename.tar.gz
[131043790060] |filename with no extension
[131043790070] |org.eclipse.birt.chart.reportitem_2.1.0.N20060628-1351.jar
[131043790080] |OPSWoracle10g_instantclient-10.2.0.3.0-3.sparc64.rpm
[131043790090] |web-app_2_3.dtd
[131043790100] |How can I get the file names and extensions for all these scenarios in a bash script?
[131043790110] |Can you please help me?
[131043790120] |I used the solution by Gilles @ bash : get file extension