How can I get a java.io.InputStream from a java.lang.String?
Update: This answer is precisely what the OP doesn't want. Please read the other answers.
For those cases when we don't care about the data being re-materialized in memory, please use:
new ByteArrayInputStream(str.getBytes("UTF-8"))
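On Java 7 and later, a small variant avoids the checked UnsupportedEncodingException thrown by the String-based getBytes overload (str is the assumed source string):

InputStream in = new ByteArrayInputStream(str.getBytes(StandardCharsets.UTF_8));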
If you don't mind a dependency on the commons-io package, then you could use the IOUtils.toInputStream(String text) method.
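For example (the Charset overload, available in newer Commons IO versions, avoids relying on the platform default encoding):

InputStream in = IOUtils.toInputStream("some text", StandardCharsets.UTF_8);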
There is an adapter in Apache Commons IO that adapts a Reader to an InputStream; it is named ReaderInputStream.
Example code:
@Test
public void testReaderInputStream() throws IOException {
    InputStream inputStream = new ReaderInputStream(new StringReader("largeString"), StandardCharsets.UTF_8);
    Assert.assertEquals("largeString", IOUtils.toString(inputStream, StandardCharsets.UTF_8));
}
Reference: https://stackoverflow.com/a/27909221/5658642
To my mind, the easiest way to do this is by pushing the data through a Writer:
import java.io.IOException;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;

public class StringEmitter {
    public static void main(String[] args) throws IOException {
        // Sink that just reports how many bytes arrive per write
        class DataHandler extends OutputStream {
            @Override
            public void write(final int b) throws IOException {
                write(new byte[] { (byte) b });
            }

            @Override
            public void write(byte[] b) throws IOException {
                write(b, 0, b.length);
            }

            @Override
            public void write(byte[] b, int off, int len)
                    throws IOException {
                System.out.println("bytecount=" + len);
            }
        }

        StringBuilder sample = new StringBuilder();
        while (sample.length() < 100 * 1000) {
            sample.append("sample");
        }

        Writer writer = new OutputStreamWriter(
                new DataHandler(), "UTF-16");
        writer.write(sample.toString());
        writer.close();
    }
}
The JVM implementation I'm using pushed data through in 8K chunks, but you could have some effect on the buffer size by reducing the number of characters written at one time and calling flush.
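For instance, a small variation on the example above (assuming the same writer and sample variables; the 1024-char chunk size is an arbitrary choice):

String s = sample.toString();
for (int i = 0; i < s.length(); i += 1024) {
    writer.write(s, i, Math.min(1024, s.length() - i));
    writer.flush(); // force the encoder to push bytes downstream now
}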
An alternative to writing your own CharsetEncoder wrapper is to use a Writer to encode the data, though it is something of a pain to do right. This should be a reliable (if inefficient) implementation:
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.Charset;
import java.util.LinkedList;
import java.util.Queue;

/** Inefficient string stream implementation */
public class StringInputStream extends InputStream {
    /* # of characters to buffer - must be >=2 to handle surrogate pairs */
    private static final int CHAR_CAP = 8;
    private final Queue<Byte> buffer = new LinkedList<Byte>();
    private final Writer encoder;
    private final String data;
    private int index;

    public StringInputStream(String sequence, Charset charset) {
        data = sequence;
        encoder = new OutputStreamWriter(
                new OutputStreamBuffer(), charset);
    }

    private int buffer() throws IOException {
        if (index >= data.length()) {
            return -1;
        }
        int rlen = index + CHAR_CAP;
        if (rlen > data.length()) {
            rlen = data.length();
        }
        for (; index < rlen; index++) {
            char ch = data.charAt(index);
            encoder.append(ch);
            // ensure data enters buffer
            encoder.flush();
        }
        if (index >= data.length()) {
            encoder.close();
        }
        return buffer.size();
    }

    @Override
    public int read() throws IOException {
        // loop in case a buffered char (e.g. half a surrogate pair)
        // produced no bytes yet
        while (buffer.size() == 0) {
            if (buffer() == -1) {
                return -1;
            }
        }
        return 0xFF & buffer.remove();
    }

    private class OutputStreamBuffer extends OutputStream {
        @Override
        public void write(int i) throws IOException {
            byte b = (byte) i;
            buffer.add(b);
        }
    }
}
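A quick usage sketch (names assumed; StandardCharsets requires Java 7+):

InputStream in = new StringInputStream("hello world", StandardCharsets.UTF_8);
int b;
while ((b = in.read()) != -1) {
    // consume the UTF-8 bytes one at a time
}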
Well, one possible way is to:
- Create a PipedOutputStream
- Pipe it to a PipedInputStream
- Wrap an OutputStreamWriter around the PipedOutputStream (you can specify the encoding in the constructor)
- Et voilà, anything you write to the OutputStreamWriter can be read from the PipedInputStream!
Of course, this seems like a rather hackish way to do it, but at least it is a way.
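A minimal sketch of that approach (assumption: the string fits in the pipe's internal buffer, about 1024 bytes by default; for larger data the writing side must run on its own thread, or the pipe will deadlock):

import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class PipedExample {
    public static void main(String[] args) throws IOException {
        PipedInputStream in = new PipedInputStream();
        Writer writer = new OutputStreamWriter(
                new PipedOutputStream(in), StandardCharsets.UTF_8);
        writer.write("short string"); // small enough to fit the pipe buffer
        writer.close();

        int b;
        while ((b = in.read()) != -1) {
            System.out.print((char) b); // fine for ASCII; real code should decode properly
        }
    }
}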
A solution is to roll your own, creating an InputStream implementation that likely would use java.nio.charset.CharsetEncoder to encode each char or chunk of chars to an array of bytes for the InputStream as necessary.
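Here is a minimal sketch of such a roll-your-own wrapper (the class name, the 64-byte scratch buffer, and the error handling are illustrative choices, not from the original answer):

import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CoderResult;

public class CharsetEncoderInputStream extends InputStream {
    private final CharBuffer chars;
    private final CharsetEncoder encoder;
    private final ByteBuffer bytes = ByteBuffer.allocate(64); // small scratch buffer
    private boolean endOfInput;
    private boolean flushed;

    public CharsetEncoderInputStream(String s, Charset charset) {
        chars = CharBuffer.wrap(s);
        encoder = charset.newEncoder();
        bytes.flip(); // start with an empty, fully drained buffer
    }

    @Override
    public int read() throws IOException {
        while (!bytes.hasRemaining()) {
            if (flushed) {
                return -1; // everything encoded and flushed
            }
            bytes.clear();
            if (!endOfInput) {
                CoderResult result = encoder.encode(chars, bytes, true);
                if (result.isError()) {
                    result.throwException(); // malformed or unmappable input
                }
                // underflow means the encoder has consumed all chars
                if (result.isUnderflow()) {
                    endOfInput = true;
                }
            }
            if (endOfInput && encoder.flush(bytes).isUnderflow()) {
                flushed = true;
            }
            bytes.flip();
            if (flushed && !bytes.hasRemaining()) {
                return -1;
            }
        }
        return bytes.get() & 0xFF;
    }
}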
You can take help from the org.hsqldb.lib library, which provides a StringInputStream class:
public StringInputStream(String paramString)
{
    this.str = paramString;
    this.available = (paramString.length() * 2);
}
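The available count is presumably the string length times two because the class streams each char as two UTF-16 bytes. A possible usage, assuming HSQLDB is on the classpath (class and constructor as shown above):

InputStream in = new org.hsqldb.lib.StringInputStream("some string");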