Lines matching `data'

43 * base32: (coreutils)base32 invocation. Base32 encode/decode data.
44 * base64: (coreutils)base64 invocation. Base64 encode/decode data.
46 * basenc: (coreutils)basenc invocation. Encoding/decoding of data.
101 * printf: (coreutils)printf invocation. Format and print data.
233 * Random sources:: Sources of random data
247 * base32 invocation:: Transform data into printable data
248 * base64 invocation:: Transform data into printable data
249 * basenc invocation:: Transform data into printable data
362 * printf invocation:: Format and print data
621 output even when that output would contain data with embedded newlines.
1134 High bandwidth data is available at a socket.
1207 @section Sources of random data
1212 sometimes need random data to do their work. For example, @samp{sort
1213 -R} must choose a hash function at random, and it needs random data to
1222 source of random data. Typically, this device gathers environmental
1224 uses the pool to generate random bits. If the pool is short of data,
1227 that this device is not designed for bulk random data generation
1231 requiring high-value or long-term protection of private data may
1232 require an alternate data source like @file{/dev/random} or
1237 can save some random data into a file and then use that file as the
1241 arbitrary amount of pseudo-random data given a seed value, using
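The manual's suggested way to get reproducible results is to derive pseudo-random bytes from a seed and feed them to @option{--random-source}; a sketch along those lines (it assumes OpenSSL is installed, and the function name and seed are illustrative):

@example
get_seeded_random()
@{
  seed="$1"
  # Derive an arbitrarily long keystream from the seed.
  openssl enc -aes-256-ctr -pass pass:"$seed" -nosalt \
    </dev/zero 2>/dev/null
@}

# The same seed always yields the same shuffle.
shuf -i1-100 --random-source=<(get_seeded_random 42)
@end example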
1630 * base32 invocation:: Transform data into printable data.
1631 * base64 invocation:: Transform data into printable data.
1632 * basenc invocation:: Transform data into printable data.
1984 groups of data from the file. By default, @command{od} prints the offset in
1985 octal, and each group of file data is a C @code{short int}'s worth of input
2072 Select the format in which to output the file data. @var{type} is a
2076 of each output line using each of the data types that you specified,
2109 of bytes to use in interpreting each number in the given data type
2112 built-in data types by following the type indicator character with
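A small illustration of those defaults and of @option{-t} (the input here is just three ASCII bytes; the leading column is the octal offset, and @samp{-t x1} selects one-byte hexadecimal output):

@example
$ printf 'abc' | od -t x1
0000000 61 62 63
0000003
@end example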
2230 @section @command{base32}: Transform data into printable data
2235 @command{base32} transforms data read from a file, or standard input,
2237 printable ASCII characters to represent binary data.
2244 @section @command{base64}: Transform data into printable data
2249 @command{base64} transforms data read from a file, or standard input,
2251 printable ASCII characters to represent binary data.
2259 The base64 encoding expands data to roughly 133% of the original.
2260 The base32 encoding expands data to roughly 160% of the original.
2274 @cindex wrap data
2275 @cindex column to wrap data after
2286 @cindex Decode base64 data
2288 Change the mode of operation, from the default of encoding data, to
2289 decoding data. Input is expected to be base64 encoded data, and the
2290 output will be the original data.
2299 to permit distorted data to be decoded.
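A round trip makes the encode/decode relationship concrete:

@example
$ printf 'hello' | base64
aGVsbG8=
$ printf 'aGVsbG8=' | base64 --decode
hello
@end example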
2306 @section @command{basenc}: Transform data into printable data
2311 @command{basenc} transforms data read from a file, or standard input,
2313 printable ASCII characters to represent binary data.
2350 The encoded data uses the @samp{ABCDEFGHIJKLMNOPQRSTUVWXYZ234567=} characters.
2358 base32 form. The encoded data uses the
2365 form. The encoded data uses the @samp{0123456789ABCDEF} characters. The format
2384 (a modified Ascii85 form). The encoded data uses the
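For instance, with a coreutils recent enough to have @command{basenc} (8.31 or later), base16 output uses exactly that alphabet:

@example
$ printf 'hello' | basenc --base16
68656C6C6F
$ printf '68656C6C6F' | basenc --base16 --decode
hello
@end example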
3076 GNU @command{tail} can output any amount of data (some other versions of
4179 Read file names and checksum information (not data) from each
4832 data may be lost if the system crashes or @command{sort} encounters
4848 Use @var{file} as a source of random data used to determine which
5145 In general this technique can be used to sort data that the @command{sort}
5226 Use @var{file} as a source of random data used to determine which
6472 file. Using the above example data:
6486 exhausted, start again at its beginning. Using the above example data:
7138 individual bytes, or where data might contain invalid bytes that are
7985 high performance (``contiguous data'') file
8186 @opindex data modification time@r{, printing or sorting files by}
8189 In long format, print the last data modification timestamp (the mtime).
8972 trying to read the data in each source file and writing it to the
9206 files share the same data blocks as long as they remain unmodified.
9207 Thus, if an I/O error affects data blocks of one of the files,
9229 to configure the default data copying behavior for @command{cp}.
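On copy-on-write file systems the block sharing can be requested explicitly; a sketch (file names are illustrative):

@example
# Share data blocks where the file system supports it;
# otherwise fall back to an ordinary copy.
cp --reflink=auto big.img big-copy.img
@end example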
9386 while optionally performing conversions on the data. Synopses:
9410 is given, output the data as a single block and skip all remaining steps.
9414 If the input data length is odd, preserve the last input byte
9425 Aggregate the resulting data into output blocks of the specified size,
9431 whose syntax was inspired by the DD (data definition) statement of
9467 In addition, if no data-transforming @option{conv} operand is specified,
9554 write operation succeeds but transfers less data than the block size.
9628 With @samp{conv=notrunc}, existing data in the output file
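A sketch of @samp{conv=notrunc} preserving the data past the write (file names are illustrative):

@example
# Overwrite only the first 512 bytes of outfile in place;
# without conv=notrunc the output file would be truncated.
dd if=patch.bin of=outfile bs=512 count=1 conv=notrunc
@end example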
9680 @cindex synchronized data writes, before finishing
9681 Synchronize output data just before finishing,
9683 This forces a physical write of output data,
9684 so that even if power is lost the output data will be preserved.
9686 usual with file systems, i.e., output data and metadata may be cached
9688 writes it, and thus output data and metadata may be lost if power is lost.
9694 @cindex synchronized data and metadata writes, before finishing
9695 Synchronize output data and metadata just before finishing,
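A sketch of both variants (file names are illustrative):

@example
# Force a physical write of the output data before dd exits.
dd if=infile of=outfile bs=1M conv=fdatasync

# Likewise, but also synchronize the metadata.
dd if=infile of=outfile bs=1M conv=fsync
@end example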
9732 Use concurrent I/O mode for data. This mode performs direct I/O
9740 Use direct I/O for data, avoiding the buffer cache.
9755 @cindex synchronized data reads
9756 Use synchronized I/O for data. For the output file, this forces a
9757 physical write of output data on each write. For the input file,
9764 @cindex synchronized data and metadata I/O
9765 Use synchronized I/O for both data and metadata.
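For example (illustrative names; @samp{oflag=dsync} is the per-write data variant, while @samp{oflag=sync} also covers metadata):

@example
# Each output write is physically committed before the next.
dd if=infile of=outfile bs=4k oflag=dsync
@end example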
9770 Request to discard the system data cache for a file.
9771 When count=0, all cached data for the file is dropped,
9777 Note data that is not already persisted to storage will not
9796 # Stream data using just the read-ahead cache.
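That comment introduces one of the manual's @samp{nocache} examples; the command it describes is along these lines:

@example
dd if=ifile of=ofile iflag=nocache oflag=nocache,sync
@end example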
9888 To process data with offset or size that is not a multiple of the I/O
9891 For example, the following shell commands copy data
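The manual's own illustration pairs two @command{dd} invocations in a subshell, so that the second picks up the file offset where the first left off (device names are placeholders):

@example
flash=/dev/sda
tape=/dev/st0

# Copy all but a 4 KiB label from flash drive to tape.
(dd bs=4k skip=1 count=0 && dd bs=512k) <$flash >$tape

# Copy from tape back to flash drive, but leave the label alone.
(dd bs=4k seek=1 count=0 && dd bs=512k) <$tape >$flash
@end example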
9909 functionality to ease the saving of as much data as possible before the
9921 # Rescue data from an (unmounted!) partition of a failing device.
9928 @command{dd} is run in the background to copy 5GB of data.
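A sketch of that progress idiom, close to the manual's example (the counts multiply out to 5 GB):

@example
# Ignore the signal so we never inadvertently terminate the dd child.
trap '' USR1

# fullblock avoids short reads triggered by signal delivery.
dd iflag=fullblock if=/dev/zero of=/dev/null count=5000000 bs=1000 &

# Ask the background dd for transfer statistics once per second.
while kill -s USR1 $! 2>/dev/null; do sleep 1; done
@end example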
10399 which normally contains no valuable data. However, it is not uncommon
10477 @cindex data, erasing
10478 @cindex erasing data
10481 extensive forensics from recovering the data.
10483 Ordinarily when you remove a file (@pxref{rm invocation}), its data
10487 file. And even if the file's data and metadata's storage space is freed
10489 reconstruct the file from the data in freed storage, and that can
10498 data, you may want to be sure that recovery is not possible
10506 with non-sensitive data.
10509 assumption: that the file system and hardware overwrite data in place.
10518 @code{data=journal} mode), Btrfs, NTFS, ReiserFS, XFS, ZFS, file
10520 journal data.
10523 File systems that write redundant data and carry on even if some writes
10538 when the file system is in @code{data=journal}
10539 mode, which journals file data in addition to just metadata. In both
10540 the @code{data=ordered} (default) and @code{data=writeback} modes,
10542 by adding the @code{data=something} option to the mount options for a
10546 shredding enough file data so that the journal cycles around and fills
10547 up with shredded data.
10550 that it does not overwrite data in place, which means @command{shred} cannot
10563 blocks by the hardware, so ``overwritten'' data blocks are still
10568 the application; if the bad blocks contain sensitive data,
10575 to look for the faint ``echoes'' of the original data underneath the
10576 overwritten data. With these older technologies, if the file has been
10578 this kind of data recovery has become difficult, and there is no
10583 with data patterns chosen to
10584 maximize the damage they do to the old data.
10604 to be recovered later. So if you keep any data you may later want
10635 Use @var{file} as a source of random data used to overwrite and to
10657 Often the file name is less sensitive than the file data, in which case
10693 random data. If this would be conspicuous on your storage device (for
10694 example, because it looks like encrypted data), or you just think
10711 Similarly, to erase all data on a selected partition of
10715 # 1 pass, write pseudo-random data; 3x faster than the default
10720 pseudo-random data. I.e., don't be tempted to use @samp{-n0 --zero},
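A sketch of such an invocation (the device name is purely illustrative; verify it carefully before running anything like this):

@example
# One pass of pseudo-random data over the whole partition.
shred --verbose -n1 /dev/sdd1
@end example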
11180 another for reading, after which data can flow as with the usual
11223 receive data. Usually this corresponds to a physical piece of hardware,
12144 No file system can hold an infinite amount of data. These commands report
12256 @cindex file system space, retrieving old data more quickly
12257 Do not invoke the @code{sync} system call before getting any usage data.
12358 @cindex file system space, retrieving current data more slowly
12359 Invoke the @code{sync} system call before getting any usage data. On
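For example (the mount point is illustrative):

@example
# Flush pending writes first, for more up-to-date usage figures.
df --sync /home
@end example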
12803 that report might change after one merely overwrites existing file data.)
12972 @item %y -- Time of last data modification
12973 @item %Y -- Time of last data modification as seconds since Epoch
13040 @item %b -- Total data blocks in file system
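Of these, @samp{%Y} is handy for numeric sorting; a sketch (the file arguments are illustrative):

@example
# Epoch mtime followed by file name, oldest first.
stat --format='%Y %n' *.log | sort -n
@end example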
13078 @command{sync} writes any data buffered in memory out to the storage device.
13086 The kernel keeps data in memory to avoid doing (relatively slow) device
13088 crashes, data may be lost or the file system corrupted as a
13090 data in memory to persistent storage.
13101 @itemx --data
13102 @opindex --data
13103 Use fdatasync(2) to sync only the data for the file,
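A sketch (the file name is illustrative):

@example
# Flush this file's data, plus whatever metadata is needed
# to keep the file system consistent.
sync --data app.log
@end example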
13140 If a @var{file} is larger than the specified size, the extra data is lost.
13200 * printf invocation:: Format and print data.
13309 @section @command{printf}: Format and print data
14236 to send some data down a pipe, but also to save a copy. Synopsis:
14243 file being written to already exists, the data it previously contained
14274 appropriate manner with pipes, and to continue to process data
14307 amount of data and also want to summarize that data without reading
14345 might exit early without consuming all the data, the @option{-p} option
14370 Consider a tool to graphically summarize file system usage data from
14373 and can easily produce terabytes of data, so you won't want to
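Roughly, assuming a viewer such as @command{xdiskusage} that can read @command{du} output on standard input (the viewer and file name may need adjusting on your system):

@example
# Save the du output for later while also viewing it graphically.
du -ak | tee dusage.txt | xdiskusage &
@end example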
15249 These settings control operations on data received from the terminal.
15342 These settings control operations on data sent to the terminal.
16971 and/or comparing data by date. The following command outputs the
17003 If you're sorting or graphing dated data, your raw date values may be
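The usual trick is a round trip through @samp{%s} (output shown for the C locale):

@example
$ date --date='2000-01-01 UTC' +%s
946684800
$ date -u --date='@@946684800'
Sat Jan  1 00:00:00 UTC 2000
@end example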
17676 Finally, if the executable requires any other files (e.g., data, state,
18579 In this mode, data is coalesced until a newline is output or
18585 In this mode, data is output immediately and only the
18586 amount of data requested is read from input.
18591 even if the underlying @code{read} returns less data than requested.
19044 will override the precision determined from the input data or set due to
19638 and ``standard error''. Briefly, ``standard input'' is a data source, where
19639 data comes from. A program should not need to either know or care if the
19640 data source is a regular file, a keyboard, a magnetic tape, or even a punched
19641 card reader. Similarly, ``standard output'' is a data sink, where data goes
19643 Programs that only read their standard input, do something to the data,
19647 With the Unix shell, it's very easy to set up data pipelines:
19650 program_to_create_data | filter1 | ... | filterN > final.pretty.data
19653 We start out by creating the raw data; each filter applies some successive
19654 transformation to the data, until by the time it comes out of the pipeline,
19659 the pipeline above. What happens if it encounters an error in the data it
19667 For filter programs to work together, the format of the data has to be
19669 lines of text. Unix data files are generally just streams of bytes, with
19675 binary data. Unix has always shied away from such things, under the
19677 data with a text editor.)
19705 but the data is not all that exciting.
19711 cuts out columns or fields of input data. For example, we can tell it
19732 (i.e., columns) in the input lines. This is useful for input data
19752 yourself using when setting up fancy data plumbing.
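For instance, the classic password-file example picks out two of its colon-separated fields:

@example
# Login name and full-name fields from the password file.
cut -d: -f1,5 /etc/passwd
@end example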
19756 merges the sorted data and writes it to standard output. It will read
19766 sorting data, you will often end up with duplicate lines, lines that
19848 This allows you to view the data at each stage in the pipeline, which helps
19886 command takes two sorted input files as input data, and prints out the
19887 files' lines in three columns. The output columns are the data lines
19888 unique to the first file, the data lines unique to the second file, and
19889 the data lines that are common to both. The @option{-1}, @option{-2}, and
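A sketch (file names are illustrative; both inputs must already be sorted):

@example
# Suppress columns 1 and 2: show only lines common to both files.
comm -12 old.sorted new.sorted

# Suppress columns 2 and 3: show only lines unique to the first.
comm -23 old.sorted new.sorted
@end example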
19941 At this point, we have data consisting of words separated by blank space.
19943 next step is to break the data apart so that we have one word per line. This
19957 We now have data consisting of one word per line, no punctuation, all one
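In the article this splitting and downcasing is done with @command{tr}; a sketch of those two stages (the input file name is illustrative):

@example
# Squeeze every run of non-letters into a single newline,
# then fold everything to lower case.
tr -cs 'A-Za-z' '\n' < manuscript.txt | tr 'A-Z' 'a-z'
@end example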
19965 At this point, the data might look something like this:
20053 a T-fitting for data pipes, copies data to files and to standard output
20059 a data manipulation language, another advanced tool
20079 Programs should never print extraneous header or trailer data, since these