Sometimes, when in a Python or SQL buffer for example, I accidentally evaluate something that dumps multiple megabytes of text into the buffer.
>>> zps = _read_zipfile_container(br.response().read(), br.geturl())
>>> len(zps)
1
>>> zps[0] # oops
\xfc\xc0H\x19#\x15\x8a\xb4XN\xb5\x9b\xd2\xfeT\x9a\xbeL\xe7a\x90l\xb3Y\xfbL
\xf2Pq\x84\x941R\xa1H\xdb\x82\xe5T\x15\xb7]\xcd\xddk\xd2\xd9\x19$;\tHcW\x8
4\x83\xc1d\x06S\x10\xcc\xb60\xd9T\x89m=\xba\'\x1ay\x0b\x83d\x0bR\xfae\xd1F
H\x19#x15\x8a\xb4-XNUq\xdb]\xdeF\xdf\xde\xc2 \x99\x85A\x1a\x9b!\x1c\x9b!\x
... all one line, but rendered increasingly slowly as many tens of thousands of continuation lines ...
At the moment I can either kill the process, or else wait for all of it to render into the buffer and then delete it manually (which might take half an hour, particularly when the output string contains no line breaks). I usually have to choose the latter option to avoid losing work in progress. Is there any way to discard the text without losing the buffer?
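For reference, the problem is easy to reproduce without the zipfile code above. The snippet below is a hypothetical stand-in: it just builds a few megabytes of binary data with no newlines, which is the worst case for rendering, and echoes it at the REPL.

```python
# Hypothetical reproduction of the problem: a multi-megabyte bytes object
# with no newlines, so the REPL echo becomes one enormous wrapped line.
blob = b"\xfc\xc0H\x19" * (1024 * 1024)  # ~4 MB of binary data

len(blob)   # cheap: prints a small number
blob        # expensive: echoing the repr floods the buffer
```

Evaluating the last line in an inferior-process buffer shows the same behaviour: the output arrives quickly, but redisplay of the single long line slows to a crawl.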