Revert "Website & API Doc site generator using DocFx script (apache#206
Browse files Browse the repository at this point in the history
…)"

This reverts commit 0d56d20.
NightOwl888 committed Jul 8, 2019
1 parent 1a91955 commit 345e311
Showing 171 changed files with 793 additions and 3,423 deletions.
11 changes: 1 addition & 10 deletions .gitignore
@@ -49,13 +49,4 @@ release/
.tools/

# NUnit test result file produced by nunit3-console.exe
-[Tt]est[Rr]esult.xml
-websites/**/_site/*
-websites/**/tools/*
-websites/**/_exported_templates/*
-websites/**/api/.manifest
-websites/**/docfx.log
-websites/**/lucenetemplate/plugins/*
-websites/apidocs/api/**/*.yml
-websites/apidocs/api/**/*.manifest
-!websites/apidocs/api/toc.yml
+[Tt]est[Rr]esult.xml
11 changes: 1 addition & 10 deletions Lucene.Net.sln
@@ -112,15 +112,6 @@ Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Lucene.Net.Tests.Join", "sr
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Lucene.Net.Tests.Memory", "src\Lucene.Net.Tests.Memory\Lucene.Net.Tests.Memory.csproj", "{3BE7B6EA-8DBC-45E2-947C-1CA7E63B5603}"
EndProject
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "apidocs", "apidocs", "{58FD6E39-F30F-4566-90E5-B7C9D6BC0660}"
ProjectSection(SolutionItems) = preProject
apidocs\docfx.filter.yml = apidocs\docfx.filter.yml
apidocs\docfx.json = apidocs\docfx.json
apidocs\docs.ps1 = apidocs\docs.ps1
apidocs\index.md = apidocs\index.md
apidocs\toc.yml = apidocs\toc.yml
EndProjectSection
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Lucene.Net.Tests.Misc", "src\Lucene.Net.Tests.Misc\Lucene.Net.Tests.Misc.csproj", "{F8DDC5B7-A621-4B67-AB4B-BBE083C05BB8}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Lucene.Net.Tests.Queries", "src\Lucene.Net.Tests.Queries\Lucene.Net.Tests.Queries.csproj", "{AC750DC0-05A3-4F96-8CC5-CFC8FD01D4CF}"
@@ -366,8 +357,8 @@ Global
HideSolutionNode = FALSE
EndGlobalSection
GlobalSection(NestedProjects) = preSolution
-{4DF7EACE-2B25-43F6-B558-8520BF20BD76} = {8CA61D33-3590-4024-A304-7B1F75B50653}
{EFB2E31A-5917-49D5-A808-FE5061A550B4} = {8CA61D33-3590-4024-A304-7B1F75B50653}
+{4DF7EACE-2B25-43F6-B558-8520BF20BD76} = {8CA61D33-3590-4024-A304-7B1F75B50653}
{119BBACD-D4DB-4E3B-922F-3DA83E0B29E2} = {4DF7EACE-2B25-43F6-B558-8520BF20BD76}
{CF3A74CA-FEFD-4F41-961B-CC8CF8D96286} = {8CA61D33-3590-4024-A304-7B1F75B50653}
{4B054831-5275-44E2-A4D4-CA0B19BEE19A} = {8CA61D33-3590-4024-A304-7B1F75B50653}
2 changes: 1 addition & 1 deletion src/Lucene.Net.Analysis.Common/Analysis/Cjk/package.md
@@ -16,7 +16,7 @@
limitations under the License.
-->

-
+<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">

Analyzer for Chinese, Japanese, and Korean, which indexes bigrams.
This analyzer generates bigram terms, which are overlapping groups of two adjacent Han, Hiragana, Katakana, or Hangul characters.
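
As an aside, the bigram behavior described above is easy to observe with a minimal Lucene.NET 4.8 sketch (the field name and sample text are illustrative, not from this commit):

```csharp
// Minimal sketch: print the overlapping CJK bigrams CJKAnalyzer emits.
// Assumes Lucene.NET 4.8 (CJKAnalyzer, ICharTermAttribute); text is made up.
using System;
using System.IO;
using Lucene.Net.Analysis.Cjk;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Util;

var analyzer = new CJKAnalyzer(LuceneVersion.LUCENE_48);
using var ts = analyzer.GetTokenStream("body", new StringReader("東京都に住む"));
var term = ts.AddAttribute<ICharTermAttribute>();
ts.Reset();
while (ts.IncrementToken())
    Console.WriteLine(term.ToString()); // overlapping pairs, e.g. 東京, 京都, ...
ts.End();
```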
2 changes: 1 addition & 1 deletion src/Lucene.Net.Analysis.Common/Analysis/Cn/package.md
@@ -16,7 +16,7 @@
limitations under the License.
-->

-
+<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">

Analyzer for Chinese, which indexes unigrams (individual chinese characters).

8 changes: 4 additions & 4 deletions src/Lucene.Net.Analysis.Common/Analysis/Compound/package.md
@@ -74,8 +74,8 @@ filter available:

#### HyphenationCompoundWordTokenFilter

-The [
-HyphenationCompoundWordTokenFilter](xref:Lucene.Net.Analysis.Compound.HyphenationCompoundWordTokenFilter) uses hyphenation grammars to find
+The [](xref:Lucene.Net.Analysis.Compound.HyphenationCompoundWordTokenFilter
+HyphenationCompoundWordTokenFilter) uses hyphenation grammars to find
potential subwords that a worth to check against the dictionary. It can be used
without a dictionary as well but then produces a lot of "nonword" tokens.
The quality of the output tokens is directly connected to the quality of the
@@ -101,8 +101,8 @@ Credits for the hyphenation code go to the

#### DictionaryCompoundWordTokenFilter

-The [
-DictionaryCompoundWordTokenFilter](xref:Lucene.Net.Analysis.Compound.DictionaryCompoundWordTokenFilter) uses a dictionary-only approach to
+The [](xref:Lucene.Net.Analysis.Compound.DictionaryCompoundWordTokenFilter
+DictionaryCompoundWordTokenFilter) uses a dictionary-only approach to
find subwords in a compound word. It is much slower than the one that
uses the hyphenation grammars. You can use it as a first start to
see if your dictionary is good or not because it is much simpler in design.
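
As an aside, a minimal Lucene.NET 4.8 sketch of the dictionary-only approach (the tiny word list and input are illustrative; constructor shapes are assumed from the 4.8.0 line):

```csharp
// Sketch: decompound a German compound using only a dictionary.
// Assumes Lucene.NET 4.8 (CharArraySet, WhitespaceTokenizer,
// DictionaryCompoundWordTokenFilter); the word list is made up.
using System.IO;
using Lucene.Net.Analysis.Compound;
using Lucene.Net.Analysis.Core;
using Lucene.Net.Analysis.Util;
using Lucene.Net.Util;

var dict = new CharArraySet(LuceneVersion.LUCENE_48,
    new[] { "donau", "dampf", "schiff", "fahrt" }, true); // true = ignore case
var source = new WhitespaceTokenizer(LuceneVersion.LUCENE_48,
    new StringReader("Donaudampfschifffahrt"));
// Emits the original token plus every dictionary subword found inside it.
var decompounded = new DictionaryCompoundWordTokenFilter(
    LuceneVersion.LUCENE_48, source, dict);
```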
11 changes: 7 additions & 4 deletions src/Lucene.Net.Analysis.Common/Analysis/Payloads/package.md
@@ -15,8 +15,11 @@
See the License for the specific language governing permissions and
limitations under the License.
-->
-
-
-
+<HTML>
+<HEAD>
+<TITLE>org.apache.lucene.analysis.payloads</TITLE>
+</HEAD>
+<BODY>
Provides various convenience classes for creating payloads on Tokens.

+</BODY>
+</HTML>
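
As an aside, a sketch of the most common convenience here, delimited payloads (assuming the 4.8 port keeps Java's DelimitedPayloadTokenFilter(input, delimiter, encoder) shape and an IdentityEncoder class):

```csharp
// Sketch: store whatever follows '|' as a payload on each token.
// Assumes Lucene.NET 4.8 ports DelimitedPayloadTokenFilter and
// IdentityEncoder from Java unchanged; the input text is made up.
using System.IO;
using Lucene.Net.Analysis.Core;
using Lucene.Net.Analysis.Payloads;
using Lucene.Net.Util;

var source = new WhitespaceTokenizer(LuceneVersion.LUCENE_48,
    new StringReader("the|DT quick|JJ fox|NN"));
// Each token becomes e.g. "quick" carrying the payload bytes "JJ".
var withPayloads = new DelimitedPayloadTokenFilter(source, '|', new IdentityEncoder());
```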
15 changes: 9 additions & 6 deletions src/Lucene.Net.Analysis.Common/Analysis/Sinks/package.md
@@ -15,10 +15,13 @@
See the License for the specific language governing permissions and
limitations under the License.
-->
-
-
-
-<xref:Lucene.Net.Analysis.Sinks.TeeSinkTokenFilter> and implementations
-of <xref:Lucene.Net.Analysis.Sinks.TeeSinkTokenFilter.SinkFilter> that
+<HTML>
+<HEAD>
+<TITLE>org.apache.lucene.analysis.sinks</TITLE>
+</HEAD>
+<BODY>
+[](xref:Lucene.Net.Analysis.Sinks.TeeSinkTokenFilter) and implementations
+of [](xref:Lucene.Net.Analysis.Sinks.TeeSinkTokenFilter.SinkFilter) that
might be useful.

+</BODY>
+</HTML>
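
As an aside, a sketch of the intended usage (assuming the 4.8 port keeps Java's TeeSinkTokenFilter.NewSinkTokenStream() shape):

```csharp
// Sketch: analyze the source text once, feed two consumers.
// Assumes Lucene.NET 4.8 mirrors Java's TeeSinkTokenFilter API.
using System.IO;
using Lucene.Net.Analysis.Core;
using Lucene.Net.Analysis.Sinks;
using Lucene.Net.Util;

var source = new WhitespaceTokenizer(LuceneVersion.LUCENE_48,
    new StringReader("one pass two consumers"));
var tee = new TeeSinkTokenFilter(source);
var sink = tee.NewSinkTokenStream(); // replays the tokens the tee captured
// Index `tee` as field A and `sink` as field B; the tee must be consumed first.
```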
@@ -16,7 +16,7 @@
limitations under the License.
-->

-<xref:Lucene.Net.Analysis.TokenFilter> and <xref:Lucene.Net.Analysis.Analyzer> implementations that use Snowball
+[](xref:Lucene.Net.Analysis.TokenFilter) and [](xref:Lucene.Net.Analysis.Analyzer) implementations that use Snowball
stemmers.

This project provides pre-compiled version of the Snowball stemmers based on revision 500 of the Tartarus Snowball repository, together with classes integrating them with the Lucene search engine.
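
As an aside, a minimal sketch of wiring a Snowball stemmer into a chain (assumes Lucene.NET 4.8's SnowballFilter(TokenStream, string) constructor; the stemmer name and input are illustrative):

```csharp
// Sketch: English Snowball stemming over a lower-casing tokenizer.
// Assumes Lucene.NET 4.8 (LowerCaseTokenizer, SnowballFilter).
using System.IO;
using Lucene.Net.Analysis.Core;
using Lucene.Net.Analysis.Snowball;
using Lucene.Net.Util;

var source = new LowerCaseTokenizer(LuceneVersion.LUCENE_48,
    new StringReader("running runs ran"));
var stemmed = new SnowballFilter(source, "English"); // "running"/"runs" -> "run"
```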
@@ -16,4 +16,4 @@
limitations under the License.
-->

-Backwards-compatible implementation to match [#LUCENE_31](xref:Lucene.Net.Util.Version)
+Backwards-compatible implementation to match [](xref:Lucene.Net.Util.Version.LUCENE_31)
@@ -16,4 +16,4 @@
limitations under the License.
-->

-Backwards-compatible implementation to match [#LUCENE_34](xref:Lucene.Net.Util.Version)
+Backwards-compatible implementation to match [](xref:Lucene.Net.Util.Version.LUCENE_34)
@@ -16,4 +16,4 @@
limitations under the License.
-->

-Backwards-compatible implementation to match [#LUCENE_36](xref:Lucene.Net.Util.Version)
+Backwards-compatible implementation to match [](xref:Lucene.Net.Util.Version.LUCENE_36)
@@ -16,4 +16,4 @@
limitations under the License.
-->

-Backwards-compatible implementation to match [#LUCENE_40](xref:Lucene.Net.Util.Version)
+Backwards-compatible implementation to match [](xref:Lucene.Net.Util.Version.LUCENE_40)
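
As an aside, these version-keyed packages back the matchVersion mechanism; a sketch (assumes Lucene.NET 4.8's LuceneVersion enum and StandardAnalyzer):

```csharp
// Sketch: matchVersion selects which historical analysis behavior to emulate;
// LUCENE_31/34/36/40 above are the constants these packages support.
// Assumes Lucene.NET 4.8.
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Util;

var legacy = new StandardAnalyzer(LuceneVersion.LUCENE_31);  // 3.1-era behavior
var current = new StandardAnalyzer(LuceneVersion.LUCENE_48); // current behavior
```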
38 changes: 19 additions & 19 deletions src/Lucene.Net.Analysis.Common/Analysis/Standard/package.md
@@ -20,40 +20,40 @@

The `org.apache.lucene.analysis.standard` package contains three fast grammar-based tokenizers constructed with JFlex:

-* <xref:Lucene.Net.Analysis.Standard.StandardTokenizer>:
+* [](xref:Lucene.Net.Analysis.Standard.StandardTokenizer):
as of Lucene 3.1, implements the Word Break rules from the Unicode Text
Segmentation algorithm, as specified in
[Unicode Standard Annex #29](http://unicode.org/reports/tr29/).
Unlike `UAX29URLEmailTokenizer`, URLs and email addresses are
**not** tokenized as single tokens, but are instead split up into
tokens according to the UAX#29 word break rules.

-[StandardAnalyzer](xref:Lucene.Net.Analysis.Standard.StandardAnalyzer) includes
-[StandardTokenizer](xref:Lucene.Net.Analysis.Standard.StandardTokenizer),
-[StandardFilter](xref:Lucene.Net.Analysis.Standard.StandardFilter),
-[LowerCaseFilter](xref:Lucene.Net.Analysis.Core.LowerCaseFilter)
-and [StopFilter](xref:Lucene.Net.Analysis.Core.StopFilter).
+[](xref:Lucene.Net.Analysis.Standard.StandardAnalyzer StandardAnalyzer) includes
+[](xref:Lucene.Net.Analysis.Standard.StandardTokenizer StandardTokenizer),
+[](xref:Lucene.Net.Analysis.Standard.StandardFilter StandardFilter),
+[](xref:Lucene.Net.Analysis.Core.LowerCaseFilter LowerCaseFilter)
+and [](xref:Lucene.Net.Analysis.Core.StopFilter StopFilter).
When the `Version` specified in the constructor is lower than
-3.1, the [ClassicTokenizer](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer)
+3.1, the [](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer ClassicTokenizer)
implementation is invoked.
-* [ClassicTokenizer](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer):
+* [](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer ClassicTokenizer):
this class was formerly (prior to Lucene 3.1) named
`StandardTokenizer`. (Its tokenization rules are not
based on the Unicode Text Segmentation algorithm.)
-[ClassicAnalyzer](xref:Lucene.Net.Analysis.Standard.ClassicAnalyzer) includes
-[ClassicTokenizer](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer),
-[StandardFilter](xref:Lucene.Net.Analysis.Standard.StandardFilter),
-[LowerCaseFilter](xref:Lucene.Net.Analysis.Core.LowerCaseFilter)
-and [StopFilter](xref:Lucene.Net.Analysis.Core.StopFilter).
+[](xref:Lucene.Net.Analysis.Standard.ClassicAnalyzer ClassicAnalyzer) includes
+[](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer ClassicTokenizer),
+[](xref:Lucene.Net.Analysis.Standard.StandardFilter StandardFilter),
+[](xref:Lucene.Net.Analysis.Core.LowerCaseFilter LowerCaseFilter)
+and [](xref:Lucene.Net.Analysis.Core.StopFilter StopFilter).

-* [UAX29URLEmailTokenizer](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizer):
+* [](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizer UAX29URLEmailTokenizer):
implements the Word Break rules from the Unicode Text Segmentation
algorithm, as specified in
[Unicode Standard Annex #29](http://unicode.org/reports/tr29/).
URLs and email addresses are also tokenized according to the relevant RFCs.

-[UAX29URLEmailAnalyzer](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailAnalyzer) includes
-[UAX29URLEmailTokenizer](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizer),
-[StandardFilter](xref:Lucene.Net.Analysis.Standard.StandardFilter),
-[LowerCaseFilter](xref:Lucene.Net.Analysis.Core.LowerCaseFilter)
-and [StopFilter](xref:Lucene.Net.Analysis.Core.StopFilter).
+[](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailAnalyzer UAX29URLEmailAnalyzer) includes
+[](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizer UAX29URLEmailTokenizer),
+[](xref:Lucene.Net.Analysis.Standard.StandardFilter StandardFilter),
+[](xref:Lucene.Net.Analysis.Core.LowerCaseFilter LowerCaseFilter)
+and [](xref:Lucene.Net.Analysis.Core.StopFilter StopFilter).
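
As an aside, the URL/email distinction drawn above can be seen directly; a minimal Lucene.NET 4.8 sketch (the sample text is illustrative):

```csharp
// Sketch: UAX29URLEmailTokenizer keeps URLs and e-mail addresses whole,
// where StandardTokenizer would split them by UAX#29 word breaks.
// Assumes Lucene.NET 4.8 APIs; the input is made up.
using System;
using System.IO;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Util;

var tok = new UAX29URLEmailTokenizer(LuceneVersion.LUCENE_48,
    new StringReader("mail dev@lucene.apache.org about http://lucene.apache.org"));
var term = tok.AddAttribute<ICharTermAttribute>();
tok.Reset();
while (tok.IncrementToken())
    Console.WriteLine(term.ToString()); // dev@lucene.apache.org stays one token
tok.End();
```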
@@ -16,4 +16,4 @@
limitations under the License.
-->

-Custom <xref:Lucene.Net.Util.AttributeImpl> for indexing collation keys as index terms.
+Custom [](xref:Lucene.Net.Util.AttributeImpl) for indexing collation keys as index terms.
4 changes: 2 additions & 2 deletions src/Lucene.Net.Analysis.Common/Collation/package.md
@@ -28,8 +28,8 @@
very slow.)

* Effective Locale-specific normalization (case differences, diacritics, etc.).
-(<xref:Lucene.Net.Analysis.Core.LowerCaseFilter> and
-<xref:Lucene.Net.Analysis.Miscellaneous.ASCIIFoldingFilter> provide these services
+([](xref:Lucene.Net.Analysis.Core.LowerCaseFilter) and
+[](xref:Lucene.Net.Analysis.Miscellaneous.ASCIIFoldingFilter) provide these services
in a generic way that doesn't take into account locale-specific needs.)

## Example Usages
11 changes: 3 additions & 8 deletions src/Lucene.Net.Analysis.Common/overview.md
@@ -1,9 +1,4 @@
----
-uid: Lucene.Net.Analysis.Common
-summary: *content
----
-
-<!--
+<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
@@ -22,6 +17,6 @@ summary: *content

Analyzers for indexing content in different languages and domains.

-For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis> package documentation.
+For an introduction to Lucene's analysis API, see the [](xref:Lucene.Net.Analysis) package documentation.

-This module contains concrete components (<xref:Lucene.Net.Analysis.CharFilter>s, <xref:Lucene.Net.Analysis.Tokenizer>s, and (<xref:Lucene.Net.Analysis.TokenFilter>s) for analyzing different types of content. It also provides a number of <xref:Lucene.Net.Analysis.Analyzer>s for different languages that you can use to get started quickly.
+This module contains concrete components ([](xref:Lucene.Net.Analysis.CharFilter)s, [](xref:Lucene.Net.Analysis.Tokenizer)s, and ([](xref:Lucene.Net.Analysis.TokenFilter)s) for analyzing different types of content. It also provides a number of [](xref:Lucene.Net.Analysis.Analyzer)s for different languages that you can use to get started quickly.
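
As an aside, the composition this paragraph describes looks like the following in practice (a sketch; assumes Lucene.NET 4.8's CreateComponents override signature and the StopAnalyzer stop-word set):

```csharp
// Sketch: composing a Tokenizer with TokenFilters into an Analyzer.
// Assumes Lucene.NET 4.8 (TokenStreamComponents,
// StopAnalyzer.ENGLISH_STOP_WORDS_SET).
using System.IO;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Core;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Util;

public sealed class MyAnalyzer : Analyzer
{
    protected override TokenStreamComponents CreateComponents(
        string fieldName, TextReader reader)
    {
        var source = new StandardTokenizer(LuceneVersion.LUCENE_48, reader);
        TokenStream chain = new LowerCaseFilter(LuceneVersion.LUCENE_48, source);
        chain = new StopFilter(LuceneVersion.LUCENE_48, chain,
            StopAnalyzer.ENGLISH_STOP_WORDS_SET);
        return new TokenStreamComponents(source, chain);
    }
}
```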
@@ -16,4 +16,4 @@
limitations under the License.
-->

-Custom <xref:Lucene.Net.Util.AttributeImpl> for indexing collation keys as index terms.
+Custom [](xref:Lucene.Net.Util.AttributeImpl) for indexing collation keys as index terms.
21 changes: 9 additions & 12 deletions src/Lucene.Net.Analysis.ICU/overview.md
@@ -1,9 +1,4 @@
----
-uid: Lucene.Net.Analysis.Icu
-summary: *content
----
-
-<!--
+<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
@@ -21,16 +16,18 @@ summary: *content
-->
<!-- :Post-Release-Update-Version.LUCENE_XY: - several mentions in this file -->

-
-
+<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
+<title>
+Apache Lucene ICU integration module
+</title>

This module exposes functionality from
[ICU](http://site.icu-project.org/) to Apache Lucene. ICU4J is a Java
library that enhances Java's internationalization support by improving
performance, keeping current with the Unicode Standard, and providing richer
APIs.

-For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis> package documentation.
+For an introduction to Lucene's analysis API, see the [](xref:Lucene.Net.Analysis) package documentation.

This module exposes the following functionality:

@@ -87,8 +84,8 @@ For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis>
very slow.)

* Effective Locale-specific normalization (case differences, diacritics, etc.).
-(<xref:Lucene.Net.Analysis.Core.LowerCaseFilter> and
-<xref:Lucene.Net.Analysis.Miscellaneous.ASCIIFoldingFilter> provide these services
+([](xref:Lucene.Net.Analysis.Core.LowerCaseFilter) and
+[](xref:Lucene.Net.Analysis.Miscellaneous.ASCIIFoldingFilter) provide these services
in a generic way that doesn't take into account locale-specific needs.)

## Example Usages
@@ -269,7 +266,7 @@ For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis>

# [Backwards Compatibility]()

-This module exists to provide up-to-date Unicode functionality that supports the most recent version of Unicode (currently 6.3). However, some users who wish for stronger backwards compatibility can restrict <xref:Lucene.Net.Analysis.Icu.ICUNormalizer2Filter> to operate on only a specific Unicode Version by using a {@link com.ibm.icu.text.FilteredNormalizer2}.
+This module exists to provide up-to-date Unicode functionality that supports the most recent version of Unicode (currently 6.3). However, some users who wish for stronger backwards compatibility can restrict [](xref:Lucene.Net.Analysis.Icu.ICUNormalizer2Filter) to operate on only a specific Unicode Version by using a {@link com.ibm.icu.text.FilteredNormalizer2}.

## Example Usages
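
A hedged illustration of the restriction above (assumes ICU4N exposes Normalizer2, FilteredNormalizer2, and UnicodeSet under ICU4J-like names, and that ICUNormalizer2Filter accepts a custom Normalizer2):

```csharp
// Sketch: pin normalization to Unicode 3.2 via a filtered Normalizer2.
// All ICU4N names here are assumed from their ICU4J counterparts.
using System.IO;
using ICU4N.Text;
using Lucene.Net.Analysis.Core;
using Lucene.Net.Analysis.Icu;
using Lucene.Net.Util;

var nfkcCf = Normalizer2.GetNFKCCasefoldInstance();
var unicode32 = new FilteredNormalizer2(nfkcCf, new UnicodeSet("[:age=3.2:]"));
var source = new WhitespaceTokenizer(LuceneVersion.LUCENE_48,
    new StringReader("sample text"));
var normalized = new ICUNormalizer2Filter(source, unicode32);
```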

13 changes: 5 additions & 8 deletions src/Lucene.Net.Analysis.Kuromoji/overview.md
@@ -1,9 +1,4 @@
----
-uid: Lucene.Net.Analysis.Kuromoji
-summary: *content
----
-
-<!--
+<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
@@ -20,10 +15,12 @@ summary: *content
limitations under the License.
-->

-
+<title>
+Apache Lucene Kuromoji Analyzer
+</title>

Kuromoji is a morphological analyzer for Japanese text.

This module provides support for Japanese text analysis, including features such as part-of-speech tagging, lemmatization, and compound word analysis.

-For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis> package documentation.
+For an introduction to Lucene's analysis API, see the [](xref:Lucene.Net.Analysis) package documentation.
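
As an aside, a minimal sketch of the analyzer this module centers on (assumes Lucene.NET 4.8's Lucene.Net.Analysis.Ja.JapaneseAnalyzer; the sample text is illustrative):

```csharp
// Sketch: morphological segmentation of Japanese text.
// Assumes Lucene.NET 4.8 (JapaneseAnalyzer, ICharTermAttribute).
using System;
using System.IO;
using Lucene.Net.Analysis.Ja;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Util;

var analyzer = new JapaneseAnalyzer(LuceneVersion.LUCENE_48);
using var ts = analyzer.GetTokenStream("body", new StringReader("関西国際空港へ行った"));
var term = ts.AddAttribute<ICharTermAttribute>();
ts.Reset();
while (ts.IncrementToken())
    Console.WriteLine(term.ToString()); // whole morphemes, lemmatized forms
ts.End();
```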
13 changes: 5 additions & 8 deletions src/Lucene.Net.Analysis.Phonetic/overview.md
@@ -1,9 +1,4 @@
----
-uid: Lucene.Net.Analysis.Phonetic
-summary: *content
----
-
-<!--
+<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
@@ -20,10 +15,12 @@ summary: *content
limitations under the License.
-->

-
+<title>
+analyzers-phonetic
+</title>

Analysis for indexing phonetic signatures (for sounds-alike search)

-For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis> package documentation.
+For an introduction to Lucene's analysis API, see the [](xref:Lucene.Net.Analysis) package documentation.

This module provides analysis components (using encoders from [Apache Commons Codec](http://commons.apache.org/codec/)) that index and search phonetic signatures.
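
As an aside, a sketch of the sounds-alike idea (assumes the 4.8 port mirrors Java's PhoneticFilter(stream, encoder, inject) and ships DoubleMetaphone under Lucene.Net.Analysis.Phonetic.Language):

```csharp
// Sketch: index phonetic signatures next to the original terms, so
// "Smith" and "Smythe" collide on their sounds-alike form.
// API names are assumed from the Java/commons-codec originals.
using System.IO;
using Lucene.Net.Analysis.Core;
using Lucene.Net.Analysis.Phonetic;
using Lucene.Net.Analysis.Phonetic.Language;
using Lucene.Net.Util;

var source = new WhitespaceTokenizer(LuceneVersion.LUCENE_48,
    new StringReader("Smith Smythe"));
// true = also keep the original token alongside its phonetic signature.
var phonetic = new PhoneticFilter(source, new DoubleMetaphone(), true);
```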
2 changes: 1 addition & 1 deletion src/Lucene.Net.Analysis.SmartCn/HHMM/package.md
@@ -16,7 +16,7 @@
limitations under the License.
-->

-
+<META http-equiv="Content-Type" content="text/html; charset=UTF-8">

SmartChineseAnalyzer Hidden Markov Model package.
@lucene.experimental
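
As an aside, the HMM machinery here backs SmartChineseAnalyzer; a minimal sketch (assumes Lucene.NET 4.8's Lucene.Net.Analysis.Cn.Smart namespace):

```csharp
// Sketch: HMM-based segmentation into whole words rather than characters.
// Assumes Lucene.NET 4.8 (SmartChineseAnalyzer); the text is illustrative.
using System.IO;
using Lucene.Net.Analysis.Cn.Smart;
using Lucene.Net.Util;

var analyzer = new SmartChineseAnalyzer(LuceneVersion.LUCENE_48);
// Consumed with Reset()/IncrementToken() as usual, "我是中国人" yields
// whole words: 我, 是, 中国, 人.
using var ts = analyzer.GetTokenStream("body", new StringReader("我是中国人"));
```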