|
By default, GNU bash assumes that every character is one byte long and one column wide. A patch for bash 2.04 (bash-2.04-diff), by Marcin 'Qrczak' Kowalczyk and Ricardas Cepas, teaches bash about multibyte characters in UTF-8 encoding.
Double-width characters, combining characters and bidi are not supported by this patch. It seems a complete redesign of the readline redisplay engine is needed.
In some installations, telnet is not 8-bit clean by default. In order to be able to send Unicode keystrokes to the remote host, you need to set telnet into "outbinary" mode. There are two ways to do this:
$ telnet -L <host>
and
$ telnet
telnet> set outbinary
telnet> open <host>
The communications program C-Kermit http://www.columbia.edu/kermit/ckermit.html (an interactive tool for connection setup, telnet, file transfer, with support for TCP/IP and serial lines), in versions 7.0 or newer, understands the file and transfer encodings UTF-8 and UCS-2 as well as the terminal encoding UTF-8, and converts between these encodings and many others. Documentation of these features can be found in http://www.columbia.edu/kermit/ckermit2.html#x6.6.
Netscape 4.05 or newer can display HTML documents in UTF-8 encoding. All a document needs is the following line between the <head> and </head> tags:
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
Netscape 4.05 or newer can also display HTML and text files in UCS-2 encoding with byte-order mark.
http://www.netscape.com/computing/download/
Mozilla milestone M16 has much better internationalization than Netscape 4. It can display HTML documents in UTF-8 encoding with support for more languages. Alas, there is a cosmetic problem with CJK fonts: some glyphs can be bigger than the line's height, thus overlapping the previous or next line.
Amaya 4.2.1 ( http://www.w3.org/Amaya/, http://www.w3.org/Amaya/User/SourceDist) now has limited handling of UTF-8 encoded HTML pages. It recognizes the encoding, but it displays only ISO-8859-1 and symbol characters; it only ever accesses the fonts
-adobe-times-*-iso8859-1
-adobe-helvetica-*-iso8859-1
-adobe-new century schoolbook-*-iso8859-1
-adobe-courier-*-iso8859-1
-adobe-symbol-*-adobe-fontspecific
Amaya is in fact an HTML editor, not only a browser. Amaya's strengths among the browsers are its speed, given enough memory, and its rendering of mathematical formulas (MathML support).
lynx-2.8 has an options screen (key 'O') which lets you set the display character set. When running in an xterm or Linux console in UTF-8 mode, set this to "UNICODE UTF-8". Note that for this setting to take effect in the current browser session, you have to confirm it on the "Accept Changes" field, and for it to take effect in future browser sessions, you have to enable the "Save options to disk" field and then confirm it on the "Accept Changes" field.
Now, again, all a document needs is the following line between the <head> and </head> tags:
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
When you are viewing text files in UTF-8 encoding, you also need to pass the command-line option "-assume_local_charset=UTF-8" (affects only file:/... URLs) or "-assume_charset=UTF-8" (affects all URLs). In lynx-2.8.2 you can alternatively, in the options screen (key 'O'), change the assumed document character set to "utf-8".
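For example, to view a local UTF-8 encoded text file such as utf-8-demo.txt (the file name here is just a placeholder), an invocation like the following should work:
$ lynx -assume_local_charset=UTF-8 utf-8-demo.txt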
There is also an option in the options screen, to set the "preferred document character set". But it has no effect, at least with file:/... URLs and with http://... URLs served by apache-1.3.0.
There is a spacing and line-breaking problem, however. (Look at the Russian section of x-utf8.html, or at utf-8-demo.txt.)
Also, in lynx-2.8.2, configured with --enable-prettysrc, the nice colour scheme does not work correctly any more when the display character set has been set to "UNICODE UTF-8". This is fixed by a simple patch lynx282.diff.
The Lynx developers say: "For any serious use of UTF-8 screen output with lynx, compiling with slang lib and -DSLANG_MBCS_HACK is still recommended."
Latest stable release: ftp://ftp.gnu.org/pub/gnu/lynx/lynx-2.8.2.tar.gz
General home page: http://lynx.browser.org/
Newer development snapshots: http://lynx.isc.org/current/, ftp://lynx.isc.org/current/
w3m by Akinori Ito http://ei5nazha.yz.yamagata-u.ac.jp/~aito/w3m/eng/ is a text mode browser for HTML pages and plain-text files. Its layout of HTML tables, enumerations etc. is much prettier than lynx's. w3m can also be used as a high quality HTML to plain text converter.
w3m 0.1.10 has command line options for the three major Japanese encodings, but can also be used for UTF-8 encoded files. Without command line options, you often have to press Ctrl-L to refresh the display, and line breaking in Cyrillic and CJK paragraphs is not good.
To fix this, Hironori Sakamoto has a patch http://www2u.biglobe.ne.jp/~hsaka/w3m/ which adds UTF-8 as a display encoding.
Some test pages for browsers can be found at the pages of Alan Wood http://www.hclrss.demon.co.uk/unicode/#links and James Kass http://home.att.net/~jameskass/.
yudit by Gáspár Sinai http://www.yudit.org/ is a first-class Unicode text editor for the X Window System. It supports simultaneous processing of many languages, input methods, and conversions for local character standards. It has facilities for entering text in all languages with only an English keyboard, using keyboard configuration maps.
It can be compiled in three versions: Xlib GUI, KDE GUI, or Motif GUI.
Customization is very easy. Typically you will first customize your font. From the font menu I chose "Unicode". Then, since the command "xlsfonts '*-*-iso10646-1'" still showed some ambiguity, I chose a font size of 13 (to match Markus Kuhn's 13-pixel fixed font).
Next, you will customize your input method. The input methods "Straight", "Unicode" and "SGML" are most remarkable. For details about the other built-in input methods, look in /usr/local/share/yudit/data/.
To change the default for the next session, edit your $HOME/.yuditrc file.
The general editor functionality is limited to editing, cut&paste and search&replace. No undo.
This version is less easy to learn, because it comes with a home-grown GUI and no easily accessible help. But it has undo functionality and should therefore be more usable than version 1.5.
yudit can display text using a TrueType font; see section "TrueType fonts" above. The Bitstream Cyberbit gives good results. For yudit to find the font, symlink it to /usr/local/share/yudit/data/cyberbit.ttf.
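For example, assuming the Cyberbit font file has been installed as /usr/local/share/fonts/ttf/Cyberbit.ttf (the actual path depends on your installation), the symlink could be created like this:
$ ln -s /usr/local/share/fonts/ttf/Cyberbit.ttf /usr/local/share/yudit/data/cyberbit.ttf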
vim (as of version 6.0r) has good support for UTF-8: when started in an UTF-8 locale, it assumes UTF-8 encoding for the console and the text files being edited. It also supports double-width (CJK) characters and combining characters, and therefore fits perfectly into an UTF-8 enabled xterm.
Installation: Download from http://www.vim.org/. After unpacking the four parts, call ./configure with the --with-features=big and --enable-multibyte arguments (or edit src/Makefile to include the --with-features=big and --enable-multibyte options). This will turn on the feature FEAT_MBYTE. Then do "make" and "make install".
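Put together as shell commands, and assuming you are in the top-level vim source directory after unpacking, the build roughly looks like this:
$ ./configure --with-features=big --enable-multibyte
$ make
$ make install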
vim can be used to edit files in other encodings. For example, to edit a BIG5 encoded file: ":e ++cc=BIG5 filename". All encoding names supported by iconv are accepted. Plus: vim automatically distinguishes UTF-8 and ISO-8859-1 files without needing any command line option.
cooledit by Paul Sheer http://www.cooledit.org/ is a good text editor for the X Window System. Since version 3.15, it has support for Unicode, including Bidi for Hebrew (but not Arabic).
A build error about a missing "vga_setpage" function can be worked around by adding "-DDO_NOT_USE_VGALIB" to the CFLAGS.
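How exactly you pass the flag depends on cooledit's build setup; with a configure-based build, something along these lines should work (the -O2 flag is just an example):
$ CFLAGS="-O2 -DDO_NOT_USE_VGALIB" ./configure
$ make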
To view UTF-8 files in an UTF-8 locale you have to modify a setting in the "Options -> Switches" panel: Enable the checkbox "Display characters outside locale". I also found it necessary to disable "Spellcheck as you type".
For viewing texts with both European and CJK characters, cooledit needs a font which contains both, for example the GNU unifont (see section "X11 Unicode fonts"): Start once
$ cooledit -fn -gnu-unifont-medium-r-normal--16-160-75-75-c-80-iso10646-1
cooledit will then use this font in all future invocations.
Unfortunately, the only characters that can be entered through the keyboard are ISO-8859-1 characters and, through a cooledit specific compose mechanism, ISO-8859-2 characters. Entering arbitrary Unicode characters in cooledit is possible, but a bit tedious.
First of all, you should read the section "International Character Set Support" (node "International") in the Emacs manual. In particular, note that you need to start Emacs using the command
$ emacs -fn fontset-standard
so that it will use a font set comprising a lot of international characters.
In the short term, there are two packages for using UTF-8 in Emacs. Neither of them requires recompiling Emacs.
You can use either of these packages, or both together. The advantages of the emacs-utf "unicode-utf8" encoding are: it loads faster, and it deals better with combining characters (important for Thai). The advantages of the Mule-UCS / oc-unicode "utf-8" encoding are: it can apply to a process buffer (such as M-x shell), not only to loading and saving of files, and it respects the widths of characters better (important for Ethiopian). However, it is less reliable: after heavy editing of a file, I have seen some Unicode characters replaced with U+FFFD after the file was saved. (But maybe those were bugs in Emacs 20.5 and 20.6 which are fixed in Emacs 20.7.)
To install the emacs-utf package, compile the program "utf2mule" and install it somewhere in your $PATH; also install unicode.el, muleuni-1.el and unicode-char.el somewhere. Then add the lines
(setq load-path (cons "/home/user/somewhere/emacs" load-path))
(if (not (string-match "XEmacs" emacs-version))
    (progn
      (require 'unicode)
      ;(setq unicode-data-path "..../UnicodeData-3.0.0.txt")
      (if (eq window-system 'x)
          (progn
            (setq fontset12
                  (create-fontset-from-fontset-spec
                   "-misc-fixed-medium-r-normal-*-12-*-*-*-*-*-fontset-standard"))
            (setq fontset13
                  (create-fontset-from-fontset-spec
                   "-misc-fixed-medium-r-normal-*-13-*-*-*-*-*-fontset-standard"))
            (setq fontset14
                  (create-fontset-from-fontset-spec
                   "-misc-fixed-medium-r-normal-*-14-*-*-*-*-*-fontset-standard"))
            (setq fontset15
                  (create-fontset-from-fontset-spec
                   "-misc-fixed-medium-r-normal-*-15-*-*-*-*-*-fontset-standard"))
            (setq fontset16
                  (create-fontset-from-fontset-spec
                   "-misc-fixed-medium-r-normal-*-16-*-*-*-*-*-fontset-standard"))
            (setq fontset18
                  (create-fontset-from-fontset-spec
                   "-misc-fixed-medium-r-normal-*-18-*-*-*-*-*-fontset-standard"))
            ; (set-default-font fontset15)
            ))))
to your $HOME/.emacs file. To activate any of the font sets, use the Mule menu item "Set Font/FontSet" or Shift-down-mouse-1. The Unicode coverage of the font sets at different sizes may depend on the installed fonts; here are screen shots of UTF-8-demo.txt and of the Mule script examples at the sizes 12, 13, 14, 15, 16 and 18. To designate a font set as the initial font set for the first frame at startup, uncomment the set-default-font line in the code snippet above.
To install the oc-unicode package, execute the command
$ emacs -batch -l oc-comp.el
and install the resulting file un-define.elc, as well as oc-unicode.el, oc-charsets.el and oc-tools.el, somewhere. Then add the lines
(setq load-path (cons "/home/user/somewhere/emacs" load-path))
(if (not (string-match "XEmacs" emacs-version))
    (progn
      (require 'oc-unicode)
      ;(setq unicode-data-path "..../UnicodeData-3.0.0.txt")
      (if (eq window-system 'x)
          (progn
            (setq fontset12
                  (oc-create-fontset
                   "-misc-fixed-medium-r-normal-*-12-*-*-*-*-*-fontset-standard"
                   "-misc-fixed-medium-r-normal-ja-12-*-iso10646-*"))
            (setq fontset13
                  (oc-create-fontset
                   "-misc-fixed-medium-r-normal-*-13-*-*-*-*-*-fontset-standard"
                   "-misc-fixed-medium-r-normal-ja-13-*-iso10646-*"))
            (setq fontset14
                  (oc-create-fontset
                   "-misc-fixed-medium-r-normal-*-14-*-*-*-*-*-fontset-standard"
                   "-misc-fixed-medium-r-normal-ja-14-*-iso10646-*"))
            (setq fontset15
                  (oc-create-fontset
                   "-misc-fixed-medium-r-normal-*-15-*-*-*-*-*-fontset-standard"
                   "-misc-fixed-medium-r-normal-ja-15-*-iso10646-*"))
            (setq fontset16
                  (oc-create-fontset
                   "-misc-fixed-medium-r-normal-*-16-*-*-*-*-*-fontset-standard"
                   "-misc-fixed-medium-r-normal-ja-16-*-iso10646-*"))
            (setq fontset18
                  (oc-create-fontset
                   "-misc-fixed-medium-r-normal-*-18-*-*-*-*-*-fontset-standard"
                   "-misc-fixed-medium-r-normal-ja-18-*-iso10646-*"))
            ; (set-default-font fontset15)
            ))))
to your $HOME/.emacs file. You can choose your appropriate font set as with
the emacs-utf package.
In order to open an UTF-8 encoded file, you will type
M-x universal-coding-system-argument unicode-utf8 RET
M-x find-file filename RET
or
C-x RET c unicode-utf8 RET
C-x C-f filename RET
(or utf-8 instead of unicode-utf8, if you prefer oc-unicode/Mule-UCS).
In order to start a shell buffer with UTF-8 I/O, you will type
M-x universal-coding-system-argument utf-8 RET
M-x shell RET
(This works with oc-unicode/Mule-UCS only.)
There is a newer version, Mule-UCS-0.81. Unfortunately, you need to rebuild Emacs from source in order to use it.
Note that all this works with Emacs 20 in windowing mode only, not in terminal mode. None of the mentioned packages works in Emacs 21, as of this writing.
Richard Stallman plans to add integrated UTF-8 support to Emacs in the long term, and so do the XEmacs developers.
(This section is written by Gilbert Baumann.)
Here is how to teach XEmacs (20.4 configured with MULE) the UTF-8 encoding. Unfortunately you need its sources to be able to patch it.
First you need these files provided by Tomohiko Morioka:
http://turnbull.sk.tsukuba.ac.jp/Tools/XEmacs/xemacs-21.0-b55-emc-b55-ucs.diff and http://turnbull.sk.tsukuba.ac.jp/Tools/XEmacs/xemacs-ucs-conv-0.1.tar.gz
The .diff is a diff against the C sources. The tar ball is Elisp code, which provides lots of code tables to map to and from Unicode. As the name of the diff file suggests, it is against XEmacs-21; I needed to help `patch' a bit. The most notable difference from my XEmacs-20.4 sources is that file-coding.[ch] was called mule-coding.[ch].
For those unfamiliar with the XEmacs-MULE stuff (as I am), a quick guide:
What we call an encoding is called by MULE a `coding-system'. The most important commands are:
M-x set-file-coding-system
M-x set-buffer-process-coding-system [comint buffers]
and the variable `file-coding-system-alist', which guides `find-file' to guess the encoding used. After stuff was running, the very first thing I did was this.
This code looks at the special mode line introduced by -*- somewhere in the first 600 bytes of the file about to be opened; if there is a field "Encoding: xyz;" and the xyz encoding ("coding system" in Emacs speak) exists, choose that. So now you could do e.g.
;;; -*- Mode: Lisp; Syntax: Common-Lisp; Package: CLEX; Encoding: utf-8; -*-
and XEmacs goes into utf-8 mode here.
After everything was running, I defined λ (Greek lambda) as a macro like:
(defmacro λ (x) `(lambda .,x))
With XFree86-4.0.1, xedit is able to edit UTF-8 files if you set the locale accordingly (see above), and add the line "Xedit*international: true" to your $HOME/.Xdefaults file.
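For example, you can append the resource from the shell and, if your X session loads resources through xrdb, re-merge the file so that newly started clients see it:
$ echo 'Xedit*international: true' >> $HOME/.Xdefaults
$ xrdb -merge $HOME/.Xdefaults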
As of version 6.1.2, aXe supports only 8-bit locales. If you add the line "Axe*international: true" to your $HOME/.Xdefaults file, it will simply dump core.
As of version 4.30, pine cannot reasonably be used to view or edit UTF-8 files. In an UTF-8 enabled xterm, it has severe redraw problems.
mined98 is a small text editor by Michiel Huisjes, Achim Müller and Thomas Wolff. http://www.inf.fu-berlin.de/~wolff/mined98.tar.gz It lets you edit UTF-8 or 8-bit encoded files, in an UTF-8 or 8-bit xterm. It also has powerful capabilities for entering Unicode characters.
mined lets you edit both 8-bit encoded and UTF-8 encoded files. By default it uses an autodetection heuristic. If you don't want to rely on heuristics, pass the command-line option -u when editing an UTF-8 file, or +u when editing an 8-bit encoded file. You can change the interpretation at any time from within the editor: it displays the encoding ("L:h" for 8-bit, "U:h" for UTF-8) in the menu line. Click on the first of these characters to change it.
mined knows about double-width and combining characters and displays them correctly. It also has a special display mode for combining characters.
mined also has a scrollbar and very nice pull-down menus. Alas, the "Home", "End", "Delete" keys do not work.
qemacs 0.2 ( http://www-stud.enst.fr/~bellard/qemacs/) is a small text editor by Fabrice Bellard with Emacs keybindings. It runs in an UTF-8 console or xterm, and can edit both 8-bit encoded and UTF-8 encoded files. It still has a few rough edges, but further development is underway.
MIME: RFC 2279 defines UTF-8 as a MIME charset, which can be transported under the 8bit, quoted-printable and base64 encodings. The older MIME UTF-7 proposal (RFC 2152) is considered to be deprecated and should not be used any further.
Mail clients released after January 1, 1999, should be capable of sending and displaying UTF-8 encoded mails, otherwise they are considered deficient. But these mails have to carry the MIME labels
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Simply piping an UTF-8 file into "mail" without caring about the MIME labels
will not work.
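As an illustration, here is a minimal raw message with the required labels, piped into a sendmail-compatible MTA (the recipient address and the body are placeholders, and the sendmail binary may live in /usr/sbin rather than in your PATH):
$ sendmail -t <<'EOF'
To: someone@example.org
Subject: UTF-8 test
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Ein Gruß in UTF-8: αβγ
EOF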
Mail client implementors should take a look at http://www.imc.org/imc-intl/ and http://www.imc.org/mail-i18n.html.
Now about the individual mail clients (or "mail user agents"):
The situation for an unpatched pine version 4.30 is as follows.
Pine does not do character set conversions. But it allows you to view UTF-8 mails in an UTF-8 text window (Linux console or xterm).
Normally, Pine will warn about different character sets each time you view an UTF-8 encoded mail. To get rid of this warning, choose S (setup), then C (config), then change the value of "character-set" to UTF-8. This option will not do anything, except to reduce the warnings, as Pine has no built-in knowledge of UTF-8.
Also note that Pine's notion of Unicode characters is pretty limited: It will display Latin and Greek characters, but not other kinds of Unicode characters.
A patch by Robert Brady <robert@suse.co.uk> http://www.ents.susu.soton.ac.uk/~robert/pine-utf8-0.1.diff adds UTF-8 support to Pine. With this patch, it decodes and prints headers and bodies properly. The patch depends on the GNOME libunicode http://cvs.gnome.org/lxr/source/libunicode/.
However, alignment remains broken in many places; replying to a mail does not cause the character set to be converted as appropriate; and the editor, pico, cannot deal with multibyte characters.
kmail (as of KDE 1.0) does not support UTF-8 mails at all.
Netscape Communicator's Messenger can send and display mails in UTF-8 encoding, but it needs a little bit of manual user intervention.
To send an UTF-8 encoded mail: After opening the "Compose" window, but before starting to compose the message, select from the menu "View -> Character Set -> Unicode (UTF-8)". Then compose the message and send it.
When you receive an UTF-8 encoded mail, Netscape unfortunately does not display it in UTF-8 right away, and does not even give a visual clue that the mail was encoded in UTF-8. You have to manually select from the menu "View -> Character Set -> Unicode (UTF-8)".
For displaying UTF-8 mails, Netscape uses different fonts. You can adjust your font settings in the "Edit -> Preferences -> Fonts" dialog; choose the "Unicode" font category.
mutt-1.2.x, as available from http://www.mutt.org/, has only rudimentary support for UTF-8: it can convert from UTF-8 into an 8-bit display charset. The mutt-1.3.x development branch also supports UTF-8 as the display charset, so you can run mutt in an UTF-8 xterm, and it has thorough support for MIME and charset conversion (relying on iconv).
exmh 2.1.2 with Tk 8.4a1 can recognize and correctly display UTF-8 mails (without CJK characters) if you add the following lines to your $HOME/.Xdefaults file.
!
! Exmh
!
exmh.mimeUCharsets: utf-8
exmh.mime_utf-8_registry: iso10646
exmh.mime_utf-8_encoding: 1
exmh.mime_utf-8_plain_families: fixed
exmh.mime_utf-8_fixed_families: fixed
exmh.mime_utf-8_proportional_families: fixed
exmh.mime_utf-8_title_families: fixed
groff 1.16.1, the GNU implementation of the traditional Unix text processing system troff/nroff, can output UTF-8 formatted text. Simply use `groff -Tutf8' instead of `groff -Tlatin1' or `groff -Tascii'.
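For instance, to format a manual page source file (foo.1 is a placeholder) for an UTF-8 terminal and view it in an UTF-8 capable pager:
$ groff -Tutf8 -man foo.1 | less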
The teTeX 0.9 (and newer) distribution contains a Unicode adaptation of TeX, called Omega ( http://www.gutenberg.eu.org/omega/, ftp://ftp.ens.fr/pub/tex/yannis/omega). Together with the unicode.tex file contained in utf8-tex-0.1.tar.gz, it enables you to use UTF-8 encoded sources as input for TeX. About a thousand Unicode characters are currently supported.
All that changes is that you run `omega' (instead of `tex') or `lambda' (instead of `latex'), and insert the following lines at the head of your source input.
\ocp\TexUTF=inutf8
\InputTranslation currentfile \TexUTF
\input unicode
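A document prepared this way is then processed with, for example (file names are placeholders):
$ omega document.tex
for plain TeX sources, or
$ lambda document.tex
for LaTeX sources.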
Other possibly related links: http://www.dante.de/projekte/nts/NTS-FAQ.html, ftp://ftp.dante.de/pub/tex/language/chinese/CJK/.
PostgreSQL 6.4 or newer can be built with the configuration option --with-mb=UNICODE.
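In other words, a source build goes roughly like this (any other configure options you normally use can be added as well):
$ ./configure --with-mb=UNICODE
$ make
$ make install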
Borland/Inprise's Interbase 6.0 can store string fields in UTF-8 format if the option "CHARACTER SET UNICODE_FSS" is given.
With http://www.flash.net/~marknu/less/less-358.tar.gz you can browse UTF-8 encoded text files in an UTF-8 xterm or console. Make sure that the environment variable LESSCHARSET is not set (or is set to utf-8). If you also have a LESSKEY environment variable set, also make sure that the file it points to does not define LESSCHARSET. If necessary, regenerate this file using the `lesskey' command, or unset the LESSKEY environment variable.
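For example, to be sure of the environment for a single invocation (the file name is a placeholder):
$ unset LESSKEY
$ LESSCHARSET=utf-8 less utf-8-demo.txt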
lv-4.49.3 by Tomio Narita http://www.ff.iij4u.or.jp/~nrt/lv/ is a file viewer with builtin character set converters. To view UTF-8 files in an UTF-8 console, use "lv -Au8". But it can also be used to view files in other CJK encodings in an UTF-8 console.
There is a small glitch: lv turns off xterm's cursor and doesn't turn it on again.
Get the GNU textutils-2.0 and apply the patch textutils-2.0.diff, then run configure, and add "#define HAVE_FGETWC 1" and "#define HAVE_FPUTWC 1" to config.h. Then rebuild.
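A possible command sequence follows; the tar file name and the -p1 patch level are assumptions, so adjust them to your setup:
$ tar xzf textutils-2.0.tar.gz
$ cd textutils-2.0
$ patch -p1 < ../textutils-2.0.diff
$ ./configure
$ echo '#define HAVE_FGETWC 1' >> config.h
$ echo '#define HAVE_FPUTWC 1' >> config.h
$ make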
Get the util-linux-2.9y package and configure it, then define ENABLE_WIDECHAR in defines.h and change the "#if 0" to "#if 1" in lib/widechar.h. In text-utils/Makefile, modify CFLAGS and LDFLAGS so that they include the directories where libutf8 is installed. Then rebuild.
figlet 2.2 has an option for UTF-8 input: "figlet -C utf8".
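For example, to render an UTF-8 encoded string read from standard input:
$ echo 'Grüße' | figlet -C utf8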
The Li18nux list of commands and utilities that ought to be made interoperable with UTF-8 is as follows. Useful information needs to get added here; I just didn't get around to it yet :-)
As of glibc-2.2, regular expressions only work for 8-bit characters. In an UTF-8 locale, regular expressions that contain non-ASCII characters or that expect to match a single multibyte character with "." do not work. This affects all commands and utilities listed below.
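The effect can be illustrated in an UTF-8 locale with a command like the following: the pattern "^.$" should match the single character é, but a byte-oriented regex engine sees two bytes and finds no match, so on an affected system the count comes out as 0 instead of 1.
$ echo 'é' | grep -c '^.$'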
No info available yet.
No info available yet.
No info available yet.
No info available yet.
As of at-3.1.8: The two uses of isalnum in at.c are invalid and should be replaced with a use of quotearg.c or an exclude list of the (fixed) list of shell metacharacters. The two uses of %8s in at.c and atd.c are invalid and should become arbitrary length.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
As of fileutils-4.0u: OK.
As of fileutils-4.0u: OK.
As of fileutils-4.0u: OK.
As of sh-utils-2.0i: OK.
As of textutils-2.0e: OK.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
As of fileutils-4.0u: OK.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
As of sh-utils-2.0i: OK.
As of fileutils-4.0u: The conv=lcase, conv=ucase options don't work correctly.
As of fileutils-4.0u: OK.
As of diffutils-2.7.2: the --side-by-side mode therefore doesn't compute column width correctly.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
As of fileutils-4.0u: OK.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
As of sh-utils-2.0i: The operators "match", "substr", "index", "length" don't work correctly.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
As of findutils-4.1.6: The "-iregex" option does not work correctly; this needs a fix in the function find/parser.c:insert_regex.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
gzip-1.3 is UTF-8 capable, but it uses only English messages in ASCII charset. Proper internationalization would require the following: use gettext; call setlocale; in function check_ofname (file gzip.c), use the function rpmatch from GNU text/sh/fileutils instead of asking for "y" or "n". The use of strlen in gzip.c:852 is wrong; it needs to use the function mbswidth.
No info available yet.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No complete info available yet.
No info available yet.
As of fileutils-4.0u: OK.
As of glibc-2.2: OK.
As of glibc-2.2: OK.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
As of fileutils-4.0y: OK.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
As of fileutils-4.0u: OK.
As of fileutils-4.0u: OK.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
As of fileutils-4.0u: OK.
No info available yet.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
No info available yet.
As of fileutils-4.0u: OK.
As of fileutils-4.0u: OK.
No info available yet.
No info available yet.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
As of sh-utils-2.0.11: OK.
No info available yet.
As of textutils-2.0e: OK.
No info available yet.
No info available yet.
As of tar-1.13.17: OK, if user and group names are always ASCII.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
As of fileutils-4.0u: OK.
No info available yet.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
As of textutils-2.0.8: OK.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
As of findutils-4.1.5: The program uses strstr; a patch has been submitted to the maintainer.
No info available yet.
No info available yet.
No info available yet.
Owen Taylor is currently developing a library for rendering multilingual text, called pango. http://www.labs.redhat.com/~otaylor/pango/, http://www.pango.org/.