1. Download the Solr archive solr-7.2.1.tgz.
2. Extract the solr-7.2.1.tgz archive.
3. Copy the webapp folder under solr-7.2.1/server/solr-webapp, rename it to solr, and place it in jetty/webapps.
4. Copy the five jars whose names start with metrics from server/lib/ into /usr/local/jetty/webapps/solr/WEB-INF/lib/.
5. Copy all jars under server/lib/ext/ into /usr/local/jetty/webapps/solr/WEB-INF/lib/; these are logging-related jars.
6. Copy the solr-dataimporthandler-* jars from solr-7.2.1/dist/ into /usr/local/jetty/webapps/solr/WEB-INF/lib/.
7. Copy the solr directory under server into /usr/local/jetty/webapps/solr/WEB-INF; this directory serves as the solr home.
8. Edit web.xml in the solr webapp. The following block is commented out by default; uncomment it and set the value to the solr home path created in step 7:
<env-entry>
  <env-entry-name>solr/home</env-entry-name>
  <env-entry-value>/usr/local/jetty/webapps/solr/WEB-INF/solr</env-entry-value>
  <env-entry-type>java.lang.String</env-entry-type>
</env-entry>
9. Start Jetty.

schema.xml defines the index's field types and fields. An example:
<schema name="example" version="1.2">
  <types>
    <fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/>
    <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true" omitNorms="true"/>
    <fieldType name="binary" class="solr.BinaryField"/>
    <fieldType name="int" class="solr.TrieIntField" precisionStep="0" omitNorms="true" positionIncrementGap="0"/>
    <fieldType name="float" class="solr.TrieFloatField" precisionStep="0" omitNorms="true" positionIncrementGap="0"/>
    <fieldType name="long" class="solr.TrieLongField" precisionStep="0" omitNorms="true" positionIncrementGap="0"/>
    <fieldType name="double" class="solr.TrieDoubleField" precisionStep="0" omitNorms="true" positionIncrementGap="0"/>
    <fieldType name="name_n_gram" class="solr.TextField">
      <analyzer type="index">
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="com.kingdee.lucene.analysis.ScriptTokenFilterFactory"/>
        <filter class="com.kingdee.lucene.analysis.NGramExtensionTokenFilterFactory" minGramSize="1" maxGramSize="40"/>
        <filter class="com.kingdee.lucene.analysis.MultipleSpellingTokenFilterFactory"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer class="solr.KeywordTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>
    ...
  </types>
  ...
</schema>
1. The schema name
<schema name="example" version="1.2">
2. fieldType
The solr.StrField type
<fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/>
Optional attributes include sortMissingLast and omitNorms, as shown above.
A StrField is not analyzed; the value is indexed/stored verbatim.
solr.TextField
solr.TextField lets you customize indexing and querying through analyzers; an analyzer consists of a tokenizer and one or more filters.
positionIncrementGap: an optional attribute that defines the position gap inserted between multiple values of this type within the same document, preventing spurious phrase matches that span value boundaries.
<analyzer type="" isMaxWordLength=""> declares an analyzer. type="index" marks the analyzer used when adding documents to the index; type="query" marks the one used at query time.
isMaxWordLength controls the segmentation granularity (for analyzers that support this attribute) and can be set separately for index and query. A common recommendation is to set isMaxWordLength to false at index time, so the finest-grained segmentation is used, making the index more complete and giving queries the best chance to match; and to set it to true at query time, so maximal-word segmentation is used and the results better match the user's intent. See the sketch below.
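A minimal sketch of the same idea, assuming the third-party IK Chinese analyzer: its IKTokenizerFactory integration exposes the equivalent switch as useSmart, where useSmart="false" is finest-grained segmentation (like isMaxWordLength="false") and useSmart="true" is maximal-word segmentation. The fieldType name text_cn is illustrative, not from the original text:

<fieldType name="text_cn" class="solr.TextField">
  <analyzer type="index">
    <!-- finest-grained segmentation while indexing -->
    <tokenizer class="org.wltea.analyzer.lucene.IKTokenizerFactory" useSmart="false"/>
  </analyzer>
  <analyzer type="query">
    <!-- maximal-word segmentation while querying -->
    <tokenizer class="org.wltea.analyzer.lucene.IKTokenizerFactory" useSmart="true"/>
  </analyzer>
</fieldType>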
<tokenizer class="solr.WhitespaceTokenizerFactory"/> specifies the tokenizer to use; solr.WhitespaceTokenizerFactory, for example, splits text on whitespace.
<filter class="solr.LowerCaseFilterFactory"/> specifies a filter, such as solr.StopFilterFactory, solr.WordDelimiterFilterFactory, solr.LowerCaseFilterFactory, solr.EnglishPorterFilterFactory, or solr.RemoveDuplicatesTokenFilterFactory. Filters run after the tokenizer: when a text-type value is added to the index, Solr first tokenizes it (here, on whitespace), then passes the tokens through each configured filter in turn, and only the tokens that survive are added to the index for querying. A minimal example appears below.
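A minimal sketch combining the tokenizer and some of the filters named above; the fieldType name and the stopwords.txt file are illustrative:

<fieldType name="text_ws_filtered" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <!-- split on whitespace first -->
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- then run each filter in order -->
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
  </analyzer>
</fieldType>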
3. Fields
A field definition includes name, type (one of the previously defined fieldTypes), indexed (whether the field is indexed), stored (whether the value is stored), multiValued (whether the field may hold multiple values), and so on. Field definitions matter, and a few rules of thumb help: for fields that might hold multiple values, set multiValued="true" to avoid errors at indexing time; if a field's value never needs to be returned, set stored="false". For example (see below).
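An illustrative set of field definitions using the fieldTypes from the schema above; the field names are hypothetical:

<field name="id"    type="string"      indexed="true" stored="true"  required="true"/>
<field name="title" type="name_n_gram" indexed="true" stored="true"/>
<field name="tags"  type="string"      indexed="true" stored="false" multiValued="true"/>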
solrconfig.xml mainly defines Solr's request handlers and extensions, including where index data is stored and the configuration rules for updates, deletes, and queries. An example:
<?xml version="1.0" encoding="UTF-8" ?>
<config>
  <luceneMatchVersion>LUCENE_42</luceneMatchVersion>
  <dataDir>${solr.data.dir:}</dataDir>
  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
  <codecFactory class="solr.SchemaCodecFactory"/>
  <indexConfig>
    <filter class="solr.LimitTokenCountFilterFactory" maxTokenCount="10000"/>
    <writeLockTimeout>1000</writeLockTimeout>
    <maxIndexingThreads>8</maxIndexingThreads>
    <useCompoundFile>false</useCompoundFile>
    <ramBufferSizeMB>32</ramBufferSizeMB>
    <maxBufferedDocs>1000</maxBufferedDocs>
    <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
      <int name="maxMergeAtOnce">10</int>
      <int name="segmentsPerTier">10</int>
    </mergePolicy>
    <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>
    <lockType>native</lockType>
    <unlockOnStartup>false</unlockOnStartup>
    <termIndexInterval>128</termIndexInterval>
    <reopenReaders>true</reopenReaders>
    <deletionPolicy class="solr.SolrDeletionPolicy">
      <str name="maxCommitsToKeep">1</str>
      <str name="maxOptimizedCommitsToKeep">0</str>
      <str name="maxCommitAge">30MINUTES</str>
      <str name="maxCommitAge">1DAY</str>
    </deletionPolicy>
    <infoStream file="INFOSTREAM.txt">false</infoStream>
  </indexConfig>
  <updateHandler class="solr.DirectUpdateHandler2">
    <autoCommit>
      <maxTime>15000</maxTime>
      <openSearcher>false</openSearcher>
    </autoCommit>
    <autoSoftCommit>
      <maxTime>1000</maxTime>
    </autoSoftCommit>
    <updateLog>
      <str name="dir">${solr.data.dir:}</str>
    </updateLog>
  </updateHandler>
  <query>
    <maxBooleanClauses>1024</maxBooleanClauses>
    <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
    <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
    <documentCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
    <fieldValueCache class="solr.FastLRUCache" size="512" autowarmCount="128" showItems="32"/>
    <enableLazyFieldLoading>true</enableLazyFieldLoading>
    <queryResultWindowSize>60</queryResultWindowSize>
    <queryResultMaxDocsCached>200</queryResultMaxDocsCached>
    <listener event="newSearcher" class="solr.QuerySenderListener">
      <arr name="queries"/>
    </listener>
    <listener event="firstSearcher" class="solr.QuerySenderListener">
      <arr name="queries"/>
    </listener>
    <useColdSearcher>false</useColdSearcher>
    <maxWarmingSearchers>4</maxWarmingSearchers>
  </query>
  <requestDispatcher handleSelect="false">
    <requestParsers enableRemoteStreaming="false" multipartUploadLimitInKB="2048000"/>
    <httpCaching never304="true"/>
  </requestDispatcher>
  <requestHandler name="/select" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <int name="rows">10</int>
    </lst>
  </requestHandler>
  <!-- A request handler that returns indented JSON by default -->
  <requestHandler name="/query" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <str name="wt">json</str>
      <str name="indent">true</str>
      <str name="df">text</str>
    </lst>
  </requestHandler>
  <requestHandler name="/get" class="solr.RealTimeGetHandler">
    <lst name="defaults">
      <str name="omitHeader">true</str>
      <str name="wt">json</str>
      <str name="indent">true</str>
    </lst>
  </requestHandler>
  <requestHandler name="/update" class="solr.UpdateRequestHandler"/>
  <!-- Solr Cell Update Request Handler
       http://wiki.apache.org/solr/ExtractingRequestHandler -->
  <requestHandler name="/update/extract" startup="lazy" class="solr.extraction.ExtractingRequestHandler">
    <lst name="defaults">
      <str name="lowernames">true</str>
      <str name="uprefix">ignored_</str>
      <!-- capture link hrefs but ignore div attributes -->
      <str name="captureAttr">true</str>
      <str name="fmap.a">links</str>
      <str name="fmap.div">ignored_</str>
    </lst>
  </requestHandler>
  <requestHandler name="/analysis/field" startup="lazy" class="solr.FieldAnalysisRequestHandler"/>
  <requestHandler name="/analysis/document" class="solr.DocumentAnalysisRequestHandler" startup="lazy"/>
  <requestHandler name="/admin/" class="solr.admin.AdminHandlers"/>
  <requestHandler name="/admin/ping" class="solr.PingRequestHandler">
    <lst name="invariants">
      <str name="q">solrpingquery</str>
    </lst>
    <lst name="defaults">
      <str name="echoParams">all</str>
    </lst>
  </requestHandler>
  <!-- Echo the request contents back to the client -->
  <requestHandler name="/debug/dump" class="solr.DumpRequestHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <str name="echoHandler">true</str>
    </lst>
  </requestHandler>
  <requestHandler name="/replication" class="solr.ReplicationHandler"/>
  <searchComponent name="spellcheck" class="solr.SpellCheckComponent">
    <str name="queryAnalyzerFieldType">textSpell</str>
    <!-- Multiple "Spell Checkers" can be declared and used by this component -->
    <!-- a spellchecker built from a field of the main index -->
    <lst name="spellchecker">
      <str name="name">default</str>
      <str name="field">name</str>
      <str name="classname">solr.DirectSolrSpellChecker</str>
      <!-- the spellcheck distance measure used, the default is the internal levenshtein -->
      <str name="distanceMeasure">internal</str>
      <!-- minimum accuracy needed to be considered a valid spellcheck suggestion -->
      <float name="accuracy">0.5</float>
      <!-- the maximum #edits we consider when enumerating terms: can be 1 or 2 -->
      <int name="maxEdits">2</int>
      <!-- the minimum shared prefix when enumerating terms -->
      <int name="minPrefix">1</int>
      <!-- maximum number of inspections per result -->
      <int name="maxInspections">5</int>
      <!-- minimum length of a query term to be considered for correction -->
      <int name="minQueryLength">4</int>
      <!-- maximum threshold of documents a query term can appear in to be considered for correction -->
      <float name="maxQueryFrequency">0.01</float>
    </lst>
    <!-- a spellchecker that can break or combine words. See "/spell" handler below for usage -->
    <lst name="spellchecker">
      <str name="name">wordbreak</str>
      <str name="classname">solr.WordBreakSolrSpellChecker</str>
      <str name="field">name</str>
      <str name="combineWords">true</str>
      <str name="breakWords">true</str>
      <int name="maxChanges">10</int>
    </lst>
  </searchComponent>
  <requestHandler name="/spell" class="solr.SearchHandler" startup="lazy">
    <lst name="defaults">
      <str name="df">text</str>
      <!-- Solr will use suggestions from both the 'default' spellchecker and from the
           'wordbreak' spellchecker and combine them. collations (re-written queries) can
           include a combination of corrections from both spellcheckers -->
      <str name="spellcheck.dictionary">default</str>
      <str name="spellcheck.dictionary">wordbreak</str>
      <str name="spellcheck">on</str>
      <str name="spellcheck.extendedResults">true</str>
      <str name="spellcheck.count">10</str>
      <str name="spellcheck.alternativeTermCount">5</str>
      <str name="spellcheck.maxResultsForSuggest">5</str>
      <str name="spellcheck.collate">true</str>
      <str name="spellcheck.collateExtendedResults">true</str>
      <str name="spellcheck.maxCollationTries">10</str>
      <str name="spellcheck.maxCollations">5</str>
    </lst>
    <arr name="last-components">
      <str>spellcheck</str>
    </arr>
  </requestHandler>
  <!-- Term Vector Component
       http://wiki.apache.org/solr/TermVectorComponent -->
  <searchComponent name="tvComponent" class="solr.TermVectorComponent"/>
  <!-- A request handler for demonstrating the term vector component.
       This is purely an example. In reality you will likely want to add
       the component to your already specified request handlers. -->
  <requestHandler name="/tvrh" class="solr.SearchHandler" startup="lazy">
    <lst name="defaults">
      <str name="df">text</str>
      <bool name="tv">true</bool>
    </lst>
    <arr name="last-components">
      <str>tvComponent</str>
    </arr>
  </requestHandler>
  <requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
    <lst name="defaults">
      <str name="config">data-config.xml</str>
    </lst>
  </requestHandler>
  <searchComponent name="clustering" enable="${solr.clustering.enabled:false}" class="solr.clustering.ClusteringComponent">
    <!-- Declare an engine -->
    <lst name="engine">
      <!-- The name, only one can be named "default" -->
      <str name="name">default</str>
      <!-- Class name of the Carrot2 clustering algorithm. Currently available algorithms are:
           * org.carrot2.clustering.lingo.LingoClusteringAlgorithm
           * org.carrot2.clustering.stc.STCClusteringAlgorithm
           * org.carrot2.clustering.kmeans.BisectingKMeansClusteringAlgorithm
           See http://project.carrot2.org/algorithms.html for the algorithms' characteristics. -->
      <str name="carrot.algorithm">org.carrot2.clustering.lingo.LingoClusteringAlgorithm</str>
      <!-- Overriding values for Carrot2 default algorithm attributes. For a description of
           all available attributes, see http://download.carrot2.org/stable/manual/#chapter.components.
           Use the attribute key as the name attribute of the str elements below. These can be
           further overridden for individual requests by specifying the attribute key as the
           request parameter name and the attribute value as the parameter value. -->
      <str name="LingoClusteringAlgorithm.desiredClusterCountBase">20</str>
      <!-- Location of Carrot2 lexical resources: a directory from which to load
           Carrot2-specific stop words and stop labels, absolute or relative to the Solr
           config directory. If a specific resource (e.g. stopwords.en) is present in the
           specified dir, it completely overrides the corresponding default that ships with
           Carrot2. For an overview of Carrot2 lexical resources, see:
           http://download.carrot2.org/head/manual/#chapter.lexical-resources -->
      <str name="carrot.lexicalResourcesDir">clustering/carrot2</str>
      <!-- The language to assume for the documents. For a list of allowed values, see:
           http://download.carrot2.org/stable/manual/#section.attribute.lingo.MultilingualClustering.defaultLanguage -->
      <str name="MultilingualClustering.defaultLanguage">ENGLISH</str>
    </lst>
    <lst name="engine">
      <str name="name">stc</str>
      <str name="carrot.algorithm">org.carrot2.clustering.stc.STCClusteringAlgorithm</str>
    </lst>
  </searchComponent>
  <searchComponent name="terms" class="solr.TermsComponent"/>
  <searchComponent class="solr.HighlightComponent" name="highlight">
    <highlighting>
      <!-- Configure the standard fragmenter -->
      <!-- This could most likely be commented out in the "default" case -->
      <fragmenter name="gap" default="true" class="solr.highlight.GapFragmenter">
        <lst name="defaults">
          <int name="hl.fragsize">100</int>
        </lst>
      </fragmenter>
      <!-- A regular-expression-based fragmenter (for sentence extraction) -->
      <fragmenter name="regex" class="solr.highlight.RegexFragmenter">
        <lst name="defaults">
          <!-- slightly smaller fragsizes work better because of slop -->
          <int name="hl.fragsize">70</int>
          <!-- allow 50% slop on fragment sizes -->
          <float name="hl.regex.slop">0.5</float>
          <!-- a basic sentence pattern -->
          <str name="hl.regex.pattern">[-\w ,/\n\"']{20,200}</str>
        </lst>
      </fragmenter>
      <!-- Configure the standard formatter -->
      <formatter name="html" default="true" class="solr.highlight.HtmlFormatter">
        <lst name="defaults">
          <str name="hl.simple.pre"><![CDATA[<em class="highlight">]]></str>
          <str name="hl.simple.post"><![CDATA[</em>]]></str>
        </lst>
      </formatter>
      <!-- Configure the standard encoder -->
      <encoder name="html" class="solr.highlight.HtmlEncoder"/>
      <!-- Configure the standard fragListBuilder -->
      <fragListBuilder name="simple" class="solr.highlight.SimpleFragListBuilder"/>
      <!-- Configure the single fragListBuilder -->
      <fragListBuilder name="single" class="solr.highlight.SingleFragListBuilder"/>
      <!-- Configure the weighted fragListBuilder -->
      <fragListBuilder name="weighted" default="true" class="solr.highlight.WeightedFragListBuilder"/>
      <!-- default tag FragmentsBuilder -->
      <fragmentsBuilder name="default" default="true" class="com.kingdee.solr.highlight.ScoreOrderFragmentsBuilder">
        <!--
        <lst name="defaults">
          <str name="hl.multiValuedSeparatorChar">/</str>
        </lst>
        -->
      </fragmentsBuilder>
      <boundaryScanner name="default" default="true" class="solr.highlight.SimpleBoundaryScanner">
        <lst name="defaults">
          <str name="hl.bs.maxScan">10</str>
          <str name="hl.bs.chars">.,!? </str>
        </lst>
      </boundaryScanner>
      <boundaryScanner name="breakIterator" class="solr.highlight.BreakIteratorBoundaryScanner">
        <lst name="defaults">
          <!-- type should be one of CHARACTER, WORD (default), LINE or SENTENCE -->
          <str name="hl.bs.type">WORD</str>
          <!-- language and country are used when constructing the Locale object, -->
          <!-- which will be used when getting an instance of BreakIterator -->
          <str name="hl.bs.language">en</str>
          <str name="hl.bs.country">US</str>
        </lst>
      </boundaryScanner>
    </highlighting>
  </searchComponent>
  <queryResponseWriter name="xml" default="true" class="solr.XMLResponseWriter"/>
  <queryResponseWriter name="json" class="solr.JSONResponseWriter"/>
  <queryResponseWriter name="python" class="solr.PythonResponseWriter"/>
  <queryResponseWriter name="ruby" class="solr.RubyResponseWriter"/>
  <queryResponseWriter name="php" class="solr.PHPResponseWriter"/>
  <queryResponseWriter name="phps" class="solr.PHPSerializedResponseWriter"/>
  <queryResponseWriter name="csv" class="solr.CSVResponseWriter"/>
  <queryResponseWriter name="velocity" class="solr.VelocityResponseWriter" startup="lazy"/>
  <queryResponseWriter name="xslt" class="solr.XSLTResponseWriter">
    <int name="xsltCacheLifetimeSeconds">5</int>
  </queryResponseWriter>
  <!-- Legacy config for the admin interface -->
  <admin>
    <defaultQuery>*:*</defaultQuery>
  </admin>
</config>
<dataDir>${solr.data.dir:d:/Server/Solr/data}</dataDir> defines where the index data and update log files are stored.
<luceneMatchVersion>4.10.1</luceneMatchVersion> declares which Lucene version's behavior Solr should match at the lowest level (here, Lucene 4.10.1).
<lib dir="../../../contrib/extraction/lib" regex=".*\.jar"/> tells Solr where to load additional jars from; if the directory named by dir does not exist, the declaration is ignored.
Index storage: the following directory factories are available (a selection sketch follows the list):
1. solr.StandardDirectoryFactory: a filesystem-based factory that tries to choose the best implementation for the current operating system and JVM.
2. solr.SimpleFSDirectoryFactory: suitable for small applications; it does not scale to large data volumes or multithreading.
3. solr.NIOFSDirectoryFactory: suitable for multithreaded environments, but not for Windows (very slow there due to a JVM bug).
4. solr.MMapDirectoryFactory: the default implementation for Solr 3.1 through 4.0 on 64-bit Linux. It uses mmap (virtual memory plus a kernel call) to access index files on disk, letting Lucene/Solr read directly from the I/O cache. A good choice when near-real-time search is not needed.
5. solr.NRTCachingDirectoryFactory: designed to keep parts of the index in memory to speed up near-real-time search.
6. solr.RAMDirectoryFactory: a pure in-memory store with no persistence; data is lost when the system restarts or the server crashes, and index replication is not supported.
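To select one of these, set the class attribute on the directoryFactory element in solrconfig.xml. A sketch; MMapDirectoryFactory is just one of the options above, and the solr.directoryFactory property fallback mirrors the sample config earlier:

<!-- pick the factory explicitly, or let a system property override it -->
<directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.MMapDirectoryFactory}"/>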
The codec factory allows custom codecs to be used. For example, to enable per-field DocValues formats, configure SchemaCodecFactory in solrconfig.xml; docValuesFormat can then be one of:
- docValuesFormat="Lucene42": the default; all values are loaded into heap memory.
- docValuesFormat="Disk": an alternative implementation that keeps part of the data on disk.
- docValuesFormat="SimpleText": a plain-text format, very slow; useful only for learning.
A sketch of both pieces follows.
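A sketch of the two sides of this configuration, assuming a schema fieldType named string; the names and the chosen format are illustrative:

<!-- solrconfig.xml: let the schema control per-field codecs -->
<codecFactory class="solr.SchemaCodecFactory"/>

<!-- schema.xml: choose a per-field DocValues format on the fieldType -->
<fieldType name="string" class="solr.StrField" docValuesFormat="Disk"/>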
<indexConfig> sets the low-level properties of the index:
1. <filter class="solr.LimitTokenCountFilterFactory" maxTokenCount="10000"/> — caps the number of tokens indexed per field.
2. <writeLockTimeout>1000</writeLockTimeout> — the maximum time (in milliseconds) an IndexWriter waits for the write lock.
3. <maxIndexingThreads>8</maxIndexingThreads> — the maximum number of threads indexing documents concurrently.
4. <useCompoundFile>false</useCompoundFile> — defaults to false in Solr. If true, the index uses fewer files at the cost of some search performance; it is a trade-off.
5. <ramBufferSizeMB>100</ramBufferSizeMB> — the RAM buffer for pending documents before a flush.
6. <maxBufferedDocs>1000</maxBufferedDocs> — the same limit expressed as a document count. If both are set, whichever is hit first triggers the flush.
7. <mergePolicy class="org.apache.lucene.index.TieredMergePolicy"> <int name="maxMergeAtOnce">10</int> <int name="segmentsPerTier">10</int> </mergePolicy> — the merge policy.
8. <mergeFactor>10</mergeFactor> — the merge factor: how many segments are merged at a time.
9. <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/> — the merge scheduler.
10. <lockType>${solr.lock.type:native}</lockType> — the lock factory (see below).
11. <unlockOnStartup>false</unlockOnStartup> — whether to release any leftover lock at startup.
12. <termIndexInterval>128</termIndexInterval> — the interval at which Lucene loads terms into memory.
13. <reopenReaders>true</reopenReaders> — reopen existing readers instead of closing and reopening them.
14. <deletionPolicy class="solr.SolrDeletionPolicy"> — the commit deletion policy; custom policies must implement org.apache.lucene.index.IndexDeletionPolicy.
15. <str name="maxCommitsToKeep">1</str> — how many commit points to keep.
16. <str name="maxOptimizedCommitsToKeep">0</str> — how many optimized commit points to keep.
17. <str name="maxCommitAge">30MINUTES</str> or <str name="maxCommitAge">1DAY</str> — delete commit points older than the given age.
18. <infoStream file="INFOSTREAM.txt">false</infoStream> — when enabled, writes Lucene's index-time debug log to the named file.
<lockType>${solr.lock.type:native}</lockType>
Sets the locking scheme for the index. There are three main options:
1. single: for read-only indexes that will never change.
2. native: uses the operating system's native file locking; it cannot be used when multiple Solr servers share one index. This is the default from Solr 3.6 onward.
3. simple: uses a plain lock-file scheme.
<updateLog> <str name="dir">${solr.ulog.dir:}</str> </updateLog> configures the index update log (tlog); the default location is data/tlog under solrhome.
Because frequent index updates make the tlog files grow ever larger, it is recommended to commit with hard commits (<autoCommit>), i.e. in batches.
Automatic hard-commit settings: maxTime sets how long (in milliseconds) to wait before committing; maxDocs sets how many documents to accumulate before committing; openSearcher controls whether a new searcher is opened after the commit. If false, documents are committed to the index but do not yet appear in search results; if true, they are both committed and searchable. A sketch follows.
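A sketch combining the options just described; the values are illustrative:

<autoCommit>
  <maxTime>15000</maxTime>            <!-- hard-commit at least every 15 seconds... -->
  <maxDocs>10000</maxDocs>            <!-- ...or after 10,000 buffered documents -->
  <openSearcher>false</openSearcher>  <!-- commit without opening a new searcher -->
</autoCommit>
<autoSoftCommit>
  <maxTime>1000</maxTime>             <!-- soft-commit every second for near-real-time visibility -->
</autoSoftCommit>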
<maxBooleanClauses>1024</maxBooleanClauses>
Sets the maximum number of clauses allowed in a Boolean query. Range and prefix searches can expand into very many Boolean clauses; if a query reaches this limit, an exception is thrown. Capping the clause count keeps over-broad queries from waiting too long.
<!-- The RequestDispatcher section describes how SolrDispatchFilter handles requests arriving at a SolrCore. handleSelect is an attribute left over from earlier versions that affects request behavior (e.g. /select?qt=XXX). With handleSelect="true", SolrDispatchFilter forwards the request to the handler named by qt (provided /select is registered). With handleSelect="false", the request goes directly to /select and returns 404 if /select is not registered. -->
<requestDispatcher handleSelect="false">
<!-- RequestParsing: these settings describe how SolrRequests are parsed and the limits placed on ContentStreams.
     enableRemoteStreaming: whether the stream.file and stream.url parameters may be used to reference remote streams.
     multipartUploadLimitInKB: the maximum size Solr allows for multipart file uploads.
     formdataUploadLimitInKB: the maximum size of form data sent via POST. -->
<requestParsers enableRemoteStreaming="true" multipartUploadLimitInKB="2048000" formdataUploadLimitInKB="2048"/>
<!-- HTTPCaching: parameters for HTTP caching. -->
<httpCaching never304="true"/>
<!--
<httpCaching never304="true">
  <cacheControl>max-age=30,public</cacheControl>
</httpCaching>
-->
<!--
<httpCaching lastModifiedFrom="openTime" etagSeed="Solr">
  <cacheControl>max-age=30,public</cacheControl>
</httpCaching>
-->
</requestDispatcher>
Source: https://www.cnblogs.com/youzhongmin/p/8716803.html