| 1 | +{ |
| 2 | + "cells": [ |
| 3 | + { |
| 4 | + "cell_type": "markdown", |
| 5 | + "metadata": {}, |
| 6 | + "source": [ |
| 7 | + "# Example of Parsing PDF using LlamaParse\n", |
| 8 | + "Source: https://github.com/run-llama/llama_parse/tree/main" |
| 9 | + ] |
| 10 | + }, |
| 11 | + { |
| 12 | + "cell_type": "markdown", |
| 13 | + "metadata": {}, |
| 14 | + "source": [ |
| 15 | + "### 1. Load the libraries\n", |
| 16 | + "\n", |
| 17 | + "If you have install `llama-parse`, uncomment the below line." |
| 18 | + ] |
| 19 | + }, |
| 20 | + { |
| 21 | + "cell_type": "code", |
| 22 | + "execution_count": 6, |
| 23 | + "metadata": {}, |
| 24 | + "outputs": [], |
| 25 | + "source": [ |
| 26 | + "# !pip3 install llama-parse" |
| 27 | + ] |
| 28 | + }, |
| 29 | + { |
| 30 | + "cell_type": "markdown", |
| 31 | + "metadata": {}, |
| 32 | + "source": [ |
| 33 | + "### 2. Set up your Llama Cloud API key\n", |
| 34 | + "\n", |
| 35 | + "To set up your `LLAMA_CLOUD_API_KEY` API key, you will:\n", |
| 36 | + "\n", |
| 37 | + "1. create a `.env` file in your root folder;\n", |
| 38 | + "2. acquire an api key from https://cloud.llamaindex.ai/\n", |
| 39 | + "2. add the following one line to your `.env file:\n", |
| 40 | + " ```\n", |
| 41 | + " LLAMA_CLOUD_API_KEY=llx-************************\n", |
| 42 | + " ```" |
| 43 | + ] |
| 44 | + }, |
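| | + { |
| | + "cell_type": "markdown", |
| | + "metadata": {}, |
| | + "source": [ |
| | + "One way to load the key into your environment is `python-dotenv` (an assumption; any method that exports `LLAMA_CLOUD_API_KEY` before the parser runs works just as well). A minimal sketch:" |
| | + ] |
| | + }, |
| | + { |
| | + "cell_type": "code", |
| | + "execution_count": null, |
| | + "metadata": {}, |
| | + "outputs": [], |
| | + "source": [ |
| | + "# !pip3 install python-dotenv\n", |
| | + "from dotenv import load_dotenv\n", |
| | + "\n", |
| | + "# Reads `.env` from the current working directory into os.environ.\n", |
| | + "load_dotenv()" |
| | + ] |
| | + }, |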
| 45 | + { |
| 46 | + "cell_type": "markdown", |
| 47 | + "metadata": {}, |
| 48 | + "source": [ |
| 49 | + "### 3. Run the parser" |
| 50 | + ] |
| 51 | + }, |
| 52 | + { |
| 53 | + "cell_type": "code", |
| 54 | + "execution_count": 7, |
| 55 | + "metadata": {}, |
| 56 | + "outputs": [], |
| 57 | + "source": [ |
| 58 | + "%reload_ext autoreload\n", |
| 59 | + "%autoreload 2\n", |
| 60 | + "\n", |
| 61 | + "import sys\n", |
| 62 | + "\n", |
| 63 | + "sys.path.append(\".\")\n", |
| 64 | + "sys.path.append(\"..\")\n", |
| 65 | + "sys.path.append(\"../..\")" |
| 66 | + ] |
| 67 | + }, |
| 68 | + { |
| 69 | + "cell_type": "code", |
| 70 | + "execution_count": 8, |
| 71 | + "metadata": {}, |
| 72 | + "outputs": [], |
| 73 | + "source": [ |
| 74 | + "import os\n", |
| 75 | + "from uniflow.flow.client import ExtractClient\n", |
| 76 | + "from uniflow.flow.config import ExtractPDFConfig\n", |
| 77 | + "from uniflow.op.model.model_config import LlamaParseModelConfig\n", |
| 78 | + "from uniflow.op.extract.split.constants import PARAGRAPH_SPLITTER" |
| 79 | + ] |
| 80 | + }, |
| 81 | + { |
| 82 | + "cell_type": "code", |
| 83 | + "execution_count": 9, |
| 84 | + "metadata": {}, |
| 85 | + "outputs": [], |
| 86 | + "source": [ |
| 87 | + "dir_cur = os.getcwd()\n", |
| 88 | + "pdf_file = \"1408.5882_page-1.pdf\"\n", |
| 89 | + "input_file = os.path.join(f\"{dir_cur}/data/raw_input/\", pdf_file)" |
| 90 | + ] |
| 91 | + }, |
| 92 | + { |
| 93 | + "cell_type": "code", |
| 94 | + "execution_count": 10, |
| 95 | + "metadata": {}, |
| 96 | + "outputs": [ |
| 97 | + { |
| 98 | + "name": "stderr", |
| 99 | + "output_type": "stream", |
| 100 | + "text": [ |
| 101 | + " 0%| | 0/1 [00:00<?, ?it/s]" |
| 102 | + ] |
| 103 | + }, |
| 104 | + { |
| 105 | + "name": "stdout", |
| 106 | + "output_type": "stream", |
| 107 | + "text": [ |
| 108 | + "Started parsing the file under job_id 45a4609f-6440-4442-a7d9-8e42804eeaa6\n" |
| 109 | + ] |
| 110 | + }, |
| 111 | + { |
| 112 | + "name": "stderr", |
| 113 | + "output_type": "stream", |
| 114 | + "text": [ |
| 115 | + "100%|██████████| 1/1 [00:03<00:00, 3.25s/it]\n" |
| 116 | + ] |
| 117 | + } |
| 118 | + ], |
| 119 | + "source": [ |
| 120 | + "import nest_asyncio\n", |
| 121 | + "\n", |
| 122 | + "nest_asyncio.apply()\n", |
| 123 | + "\n", |
| 124 | + "data = [\n", |
| 125 | + " {\"filename\": input_file},\n", |
| 126 | + "]\n", |
| 127 | + "\n", |
| 128 | + "config = ExtractPDFConfig(\n", |
| 129 | + " model_config=LlamaParseModelConfig(\n", |
| 130 | + " model_name = \"LlamaIndex/LlamaParse\",\n", |
| 131 | + " api_key = os.getenv(\"LLAMA_CLOUD_API_KEY\"),\n", |
| 132 | + " num_wokers = 4,\n", |
| 133 | + " sync = True,\n", |
| 134 | + " result_type = \"markdown\",\n", |
| 135 | + " language = \"en\",\n", |
| 136 | + " ),\n", |
| 137 | + " splitter=PARAGRAPH_SPLITTER,\n", |
| 138 | + ")\n", |
| 139 | + "llama_client = ExtractClient(config)\n", |
| 140 | + "\n", |
| 141 | + "output = llama_client.run(data)" |
| 142 | + ] |
| 143 | + }, |
| 144 | + { |
| 145 | + "cell_type": "code", |
| 146 | + "execution_count": 11, |
| 147 | + "metadata": {}, |
| 148 | + "outputs": [ |
| 149 | + { |
| 150 | + "data": { |
| 151 | + "text/plain": [ |
| 152 | + "[{'output': [{'text': ['# Convolutional Neural Networks for Sentence Classification',\n", |
| 153 | + " 'Yoon Kim',\n", |
| 154 | + " 'New York University',\n", |
| 155 | + " 'yhk255@nyu.edu',\n", |
| 156 | + " 'Abstract',\n", |
| 157 | + " 'We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification.',\n", |
| 158 | + " 'Introduction',\n", |
| 159 | + " 'Deep learning models have achieved remarkable results in computer vision (Krizhevsky et al., 2012) and speech recognition (Graves et al., 2013) in recent years. Within natural language processing, much of the work with deep learning methods has involved learning word vector representations through neural language models (Bengio et al., 2003; Yih et al., 2011; Mikolov et al., 2013) and performing composition over the learned word vectors for classification (Collobert et al., 2011). Word vectors, wherein words are projected from a sparse, 1-of-V encoding (here V is the vocabulary size) onto a lower dimensional vector space via a hidden layer, are essentially feature extractors that encode semantic features of words in their dimensions. In such dense representations, semantically close words are likewise close—in euclidean or cosine distance—in the lower dimensional vector space.',\n", |
| 160 | + " 'Convolutional neural networks (CNN) utilize layers with convolving filters that are applied to local features (LeCun et al., 1998). Originally invented for computer vision, CNN models have subsequently been shown to be effective for NLP and have achieved excellent results in semantic parsing (Yih et al., 2014), search query retrieval (Shen et al., 2014), sentence modeling (Kalchbrenner et al., 2014), and other traditional NLP tasks (Collobert et al., 2011).',\n", |
| 161 | + " 'In the present work, we train a simple CNN with one layer of convolution on top of word vectors obtained from an unsupervised neural language model. These vectors were trained by Mikolov et al. (2013) on 100 billion words of Google News, and are publicly available. We initially keep the word vectors static and learn only the other parameters of the model. Despite little tuning of hyperparameters, this simple model achieves excellent results on multiple benchmarks, suggesting that the pre-trained vectors are ‘universal’ feature extractors that can be utilized for various classification tasks. Learning task-specific vectors through fine-tuning results in further improvements. We finally describe a simple modification to the architecture to allow for the use of both pre-trained and task-specific vectors by having multiple channels.',\n", |
| 162 | + " 'Our work is philosophically similar to Razavian et al. (2014) which showed that for image classification, feature extractors obtained from a pre-trained deep learning model perform well on a variety of tasks—including tasks that are very different from the original task for which the feature extractors were trained.',\n", |
| 163 | + " 'Model',\n", |
| 164 | + " 'The model architecture, shown in figure 1, is a slight variant of the CNN architecture of Collobert et al. (2011). Let xi ∈ Rk be the k-dimensional word vector corresponding to the i-th word in the sentence. A sentence of length n (padded where necessary).',\n", |
| 165 | + " '1. https://code.google.com/p/word2vec/']}],\n", |
| 166 | + " 'root': <uniflow.node.Node at 0x251484b3010>}]" |
| 167 | + ] |
| 168 | + }, |
| 169 | + "execution_count": 11, |
| 170 | + "metadata": {}, |
| 171 | + "output_type": "execute_result" |
| 172 | + } |
| 173 | + ], |
| 174 | + "source": [ |
| 175 | + "output" |
| 176 | + ] |
| 177 | + }, |
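| | + { |
| | + "cell_type": "markdown", |
| | + "metadata": {}, |
| | + "source": [ |
| | + "Based on the structure printed above, the parsed paragraphs can be pulled out of the nested result (a sketch; keys as observed in that output):" |
| | + ] |
| | + }, |
| | + { |
| | + "cell_type": "code", |
| | + "execution_count": null, |
| | + "metadata": {}, |
| | + "outputs": [], |
| | + "source": [ |
| | + "# Each input file yields one result dict; `text` holds the split paragraphs.\n", |
| | + "paragraphs = output[0][\"output\"][0][\"text\"]\n", |
| | + "print(f\"{len(paragraphs)} paragraphs extracted\")\n", |
| | + "print(paragraphs[0])" |
| | + ] |
| | + }, |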
| 178 | + { |
| 179 | + "cell_type": "code", |
| 180 | + "execution_count": null, |
| 181 | + "metadata": {}, |
| 182 | + "outputs": [], |
| 183 | + "source": [] |
| 184 | + } |
| 185 | + ], |
| 186 | + "metadata": { |
| 187 | + "kernelspec": { |
| 188 | + "display_name": "uniflow", |
| 189 | + "language": "python", |
| 190 | + "name": "python3" |
| 191 | + }, |
| 192 | + "language_info": { |
| 193 | + "codemirror_mode": { |
| 194 | + "name": "ipython", |
| 195 | + "version": 3 |
| 196 | + }, |
| 197 | + "file_extension": ".py", |
| 198 | + "mimetype": "text/x-python", |
| 199 | + "name": "python", |
| 200 | + "nbconvert_exporter": "python", |
| 201 | + "pygments_lexer": "ipython3", |
| 202 | + "version": "3.10.13" |
| 203 | + } |
| 204 | + }, |
| 205 | + "nbformat": 4, |
| 206 | + "nbformat_minor": 2 |
| 207 | +} |