One of the best formats for scanned documents is DjVu. An essential feature of the format is the hidden text layer, usually containing the results of Optical Character Recognition. Another important feature is the ability to store (and serve over the Internet) a document as a collection of individual pages. From the very beginning the DjVu format has also been used for dictionaries; in particular, several Polish dictionaries are available in this format. The question is therefore how to search the text layer of such large multivolume works efficiently. For this purpose the author intends in particular to adapt 'Poliqarp' (Polyinterpretation Indexing Query and Retrieval Processor), a GPL-licensed corpus query tool developed at the Institute of Computer Science of the Polish Academy of Sciences. Some preliminary experiments are described in the talk. In his 'quick and dirty' approach the author treats every page as a single document whose metadata consist of the name of the document index and the name of the file with the page content. For every word, instead of grammatical tags, he provides its localization on the page in the form of the line number and the position within the line. Taken together, these data allow the search results to be linked to the appropriate fragments of the original scans. The author also mentions another approach to the problem, exemplified by the djvu-xfgrep program.
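The 'page as document' idea can be illustrated with a minimal sketch (this is an assumption-laden illustration, not the author's actual Poliqarp configuration or data format): every word extracted from a page's hidden text layer is annotated with its line number and position in the line, so that a query hit carries enough information to locate the word on the original scan. The page file names below are hypothetical.

```python
def index_page(page_lines):
    """Return (word, line_no, pos_in_line) triples for one page's text layer."""
    entries = []
    for line_no, line in enumerate(page_lines, start=1):
        for pos, word in enumerate(line.split(), start=1):
            entries.append((word.lower(), line_no, pos))
    return entries

def build_index(pages):
    """pages: mapping page_file -> list of text lines (the hidden text layer).

    Each index entry keeps the page file name plus the word's location,
    playing the role that grammatical tags play in an ordinary corpus.
    """
    index = {}
    for page_file, lines in pages.items():
        for word, line_no, pos in index_page(lines):
            index.setdefault(word, []).append((page_file, line_no, pos))
    return index

# Hypothetical two-page dictionary fragment:
pages = {
    "slownik_p0001.djvu": ["Ala ma kota", "kot ma Ale"],
    "slownik_p0002.djvu": ["Ala lubi psy"],
}
index = build_index(pages)
# Each hit identifies the page file and the word's place on the page,
# which is what allows linking results back to the scanned image.
print(index["ala"])
```

A real implementation would of course read the text layer from the DjVu files themselves and let Poliqarp interpret the location data as if they were tags; the sketch only shows the shape of the per-word records.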