<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>At Your Service (BioFaceBook Personal Blog)</title>
	<atom:link href="http://www.biofacebook.com/?feed=rss2" rel="self" type="application/rss+xml" />
	<link>http://www.biofacebook.com</link>
	<description>Recording my footprints in bioinformatics, bit by bit (NGS, Genome, Meta, Linux)</description>
	<lastBuildDate>Sun, 23 Aug 2020 03:28:53 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=4.1.41</generator>
	<item>
		<title>Metagenomics standard operating procedure v2</title>
		<link>http://www.biofacebook.com/?p=1527</link>
		<comments>http://www.biofacebook.com/?p=1527#comments</comments>
		<pubDate>Sun, 23 Aug 2020 03:28:53 +0000</pubDate>
		<dc:creator><![CDATA[szypanther]]></dc:creator>
		<category><![CDATA[Third-generation sequencing]]></category>
		<category><![CDATA[Next-generation sequencing]]></category>
		<category><![CDATA[Bioinformatics]]></category>

		<guid isPermaLink="false">http://www.biofacebook.com/?p=1527</guid>
		<description><![CDATA[<p>http://www.360doc.com/content/20/0823/10/71250389_931761136.shtml</p> <p>https://github.com/LangilleLab/microbiome_helper/wiki/Metagenomics-standard-operating-procedure-v2</p> <p>Note that this workflow is continually being updated. If you want to use the below commands be sure to keep track of them locally.</p> <p>Last updated: 9 Oct 2018 (see revisions above for earlier minor versions and Metagenomics SOPs on the SOP for earlier major versions)</p> <p>Important points to keep in mind:</p> The [...]]]></description>
				<content:encoded><![CDATA[<p><strong><em>http://www.360doc.com/content/20/0823/10/71250389_931761136.shtml</em></strong></p>
<p>https://github.com/LangilleLab/microbiome_helper/wiki/Metagenomics-standard-operating-procedure-v2</p>
<p><em>Note that this workflow is continually being updated. If you want to use the below commands be sure to keep track of them locally.</em></p>
<p><em>Last updated: 9 Oct 2018 (see <code>revisions</code> above for earlier minor versions and <code>Metagenomics SOPs</code> on the SOP for earlier major versions)</em></p>
<p>Important points to keep in mind:</p>
<ul>
<li>The below options are not necessarily best for your data; it is important to explore different options, especially at the read quality filtering stage.</li>
<li>If you are confused about a particular step be sure to check out the pages under <code>Metagenomic Resources</code> on the right side-bar.</li>
<li>You should check that only the FASTQs of interest are being specified at each step (e.g. with the <code>ls</code> command). If you are re-running commands multiple times or using optional steps the input filenames and/or folders could differ.</li>
<li>The tool <code>parallel</code> comes up several times in this workflow. Be sure to check out our tutorial on this tool <a href="https://github.com/LangilleLab/microbiome_helper/wiki/Quick-Introduction-to-GNU-Parallel">here</a>.</li>
<li>You can always run <code>parallel</code> with the <code>--dry-run</code> option to see what commands are being run to double-check they are doing what you want!</li>
</ul>
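As a quick illustration of the last point (the FASTQ names below are made-up placeholders, not files from this SOP), <code>--dry-run</code> makes <code>parallel</code> print each command it would have run instead of executing it:

```shell
# --dry-run prints the commands parallel would execute, one per line,
# without running them (sampleA_R1.fastq / sampleA_R2.fastq are placeholders).
parallel --dry-run 'gzip {}' ::: sampleA_R1.fastq sampleA_R2.fastq
```

This is a cheap way to confirm the replacement strings expand to the filenames you expect before committing to a long run.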
<p>This workflow starts with raw paired-end MiSeq or NextSeq data in demultiplexed FASTQ format assumed to be located within a folder called <code>raw_data</code>.</p>
<h2><a id="user-content-1-first-steps" class="anchor" href="https://github.com/LangilleLab/microbiome_helper/wiki/Metagenomics-standard-operating-procedure-v2#1-first-steps"></a>1. First Steps</h2>
<h3><a id="user-content-11-join-lanes-together-all-illumina-except-miseq" class="anchor" href="https://github.com/LangilleLab/microbiome_helper/wiki/Metagenomics-standard-operating-procedure-v2#11-join-lanes-together-all-illumina-except-miseq"></a>1.1 Join lanes together (all Illumina, except MiSeq)</h3>
<p>Concatenate multiple lanes of sequencing together (e.g. for NextSeq data). If you do <strong>NOT</strong> do this step, remember to change <code>cat_lanes</code> to <code>raw_data</code> in the commands below, which assume you are working with the most common type of metagenome data.</p>
<pre><code>concat_lanes.pl raw_data/* -o cat_lanes -p 4
</code></pre>
<h3><a id="user-content-12-inspect-read-quality" class="anchor" href="https://github.com/LangilleLab/microbiome_helper/wiki/Metagenomics-standard-operating-procedure-v2#12-inspect-read-quality"></a>1.2 Inspect read quality</h3>
<p>Run <code>fastqc</code> to allow manual inspection of the quality of sequences.</p>
<pre><code>mkdir fastqc_out
fastqc -t 4 cat_lanes/* -o fastqc_out/
</code></pre>
<h3><a id="user-content-13-optional-stitch-fr-reads" class="anchor" href="https://github.com/LangilleLab/microbiome_helper/wiki/Metagenomics-standard-operating-procedure-v2#13-optional-stitch-fr-reads"></a>1.3 (Optional) Stitch F+R reads</h3>
<p>Stitch paired-end reads together (a summary of stitching results is written to &#8220;pear_summary_log.txt&#8221;). Note: it is important to check the % of reads assembled. It may be better to concatenate the forward and reverse reads together if the assembly % is too low (see step 2.2).</p>
<pre><code>run_pear.pl -p 4 -o stitched_reads cat_lanes/*
</code></pre>
<p><strong><em>If you don&#8217;t stitch your reads together at this step you will need to unzip your FASTQ files before continuing with the below commands.</em></strong></p>
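A minimal sketch of that unzip step, assuming the gzipped FASTQs sit in a <code>cat_lanes</code> folder as in the commands above (adjust the path if you skipped lane concatenation):

```shell
# Decompress every gzipped FASTQ in cat_lanes/ so the downstream commands
# can read plain .fastq files; the guard makes it a no-op if none exist.
for f in cat_lanes/*.fastq.gz; do
    [ -e "$f" ] || continue
    gunzip "$f"
done
```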
<h2><a id="user-content-2-read-quality-control-and-contaminant-screens" class="anchor" href="https://github.com/LangilleLab/microbiome_helper/wiki/Metagenomics-standard-operating-procedure-v2#2-read-quality-control-and-contaminant-screens"></a>2. Read Quality-Control and Contaminant Screens</h2>
<h3><a id="user-content-21-running-kneaddata" class="anchor" href="https://github.com/LangilleLab/microbiome_helper/wiki/Metagenomics-standard-operating-procedure-v2#21-running-kneaddata"></a>2.1 Running KneadData</h3>
<p>Use <code>kneaddata</code> to run pre-processing tools. First, Trimmomatic is run to remove low-quality sequences. Then Bowtie2 is run to screen out contaminant sequences. Below we are screening out reads that map to the human or PhiX genomes. Note that KneadData is being run below on all unstitched FASTQ pairs with <code>parallel</code>; you can see our quick tutorial on this tool <a href="https://github.com/LangilleLab/microbiome_helper/wiki/Quick-Introduction-to-GNU-Parallel">here</a>. For a detailed breakdown of the options in the below command see <a href="https://github.com/LangilleLab/microbiome_helper/wiki/Metagenomics-Sequencing-Pre-processing">this page</a>. The forward and reverse reads will be specified by &#8220;_1&#8221; and &#8220;_2&#8221; in the output files; ignore the &#8220;R1&#8221; in each filename. Note that the <code>\</code> characters at the end of each line are just to split the command over multiple lines to make it easier to read.</p>
<pre><code>parallel -j 1 --link 'kneaddata -i {1} -i {2} -o kneaddata_out/ \
-db /home/shared/bowtiedb/GRCh38_PhiX --trimmomatic /usr/local/prg/Trimmomatic-0.36/ \
-t 4 --trimmomatic-options "SLIDINGWINDOW:4:20 MINLEN:50" \
--bowtie2-options "--very-sensitive --dovetail" --remove-intermediate-output' \
 ::: cat_lanes/*_R1.fastq ::: cat_lanes/*_R2.fastq
</code></pre>
<p>Clean up the output directory (helps downstream commands) by moving the discarded sequences to a subfolder:</p>
<pre><code>mkdir kneaddata_out/contam_seq

mv kneaddata_out/*_contam*.fastq kneaddata_out/contam_seq
</code></pre>
<p>You can produce a logfile summarizing the kneadData output with this command:</p>
<pre><code>kneaddata_read_count_table --input kneaddata_out --output kneaddata_read_counts.txt
</code></pre>
<h3><a id="user-content-22-concatenate-unstitched-output-omit-if-data-stitched-above" class="anchor" href="https://github.com/LangilleLab/microbiome_helper/wiki/Metagenomics-standard-operating-procedure-v2#22-concatenate-unstitched-output-omit-if-data-stitched-above"></a>2.2 Concatenate unstitched output (Omit if data stitched above)</h3>
<p>If you did not stitch your paired-end reads together with <code>pear</code>, then you can concatenate the forward and reverse FASTQs together now. Note this is done after quality filtering so that both reads in a pair are either discarded or retained. It&#8217;s important to only specify the &#8220;paired&#8221; FASTQs outputted by <code>kneaddata</code> in the below command.</p>
<pre><code>concat_paired_end.pl -p 4 --no_R_match -o cat_reads kneaddata_out/*_paired_*.fastq 
</code></pre>
<p>You should check over the commands that are printed to screen to make sure the correct FASTQs are being concatenated together.</p>
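One way to do that check, as suggested with <code>ls</code> in the introduction (paths assumed from the KneadData step above), is to list exactly the files the concatenation command will pick up before running it:

```shell
# List only the quality-filtered "paired" FASTQs that concat_paired_end.pl
# should consume; contaminant or unmatched files must not appear here.
ls kneaddata_out/*_paired_*.fastq
```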
<h2><a id="user-content-3-determine-functions-with-humann2" class="anchor" href="https://github.com/LangilleLab/microbiome_helper/wiki/Metagenomics-standard-operating-procedure-v2#3-determine-functions-with-humann2"></a>3. Determine Functions with HUMAnN2</h2>
<h3><a id="user-content-31-run-humann2" class="anchor" href="https://github.com/LangilleLab/microbiome_helper/wiki/Metagenomics-standard-operating-procedure-v2#31-run-humann2"></a>3.1 Run HUMAnN2</h3>
<p>Run <code>humann2</code> with <code>parallel</code> to calculate abundance of UniRef90 gene families and MetaCyc pathways. If you are processing environment data (e.g. soil samples) the vast majority of the reads may not map using this approach. Instead, you can try mapping against the UniRef50 database (which you can point to with the <code>--protein-database</code> option).</p>
<pre><code>mkdir humann2_out

parallel -j 4 'humann2 --threads 1 --input {} --output humann2_out/{/.}' ::: cat_reads/*fastq
</code></pre>
<h3><a id="user-content-32-merge-individual-sample-data-together" class="anchor" href="https://github.com/LangilleLab/microbiome_helper/wiki/Metagenomics-standard-operating-procedure-v2#32-merge-individual-sample-data-together"></a>3.2 Merge individual sample data together</h3>
<p>Join HUMAnN2 output per sample into one table.</p>
<pre><code>mkdir humann2_final_out

humann2_join_tables -s --input humann2_out/ --file_name pathabundance --output humann2_final_out/humann2_pathabundance.tsv
humann2_join_tables -s --input humann2_out/ --file_name pathcoverage --output humann2_final_out/humann2_pathcoverage.tsv
humann2_join_tables -s --input humann2_out/ --file_name genefamilies --output humann2_final_out/humann2_genefamilies.tsv
</code></pre>
<h3><a id="user-content-33-table-output-normalization" class="anchor" href="https://github.com/LangilleLab/microbiome_helper/wiki/Metagenomics-standard-operating-procedure-v2#33-table-output-normalization"></a>3.3 Table output normalization</h3>
<p>Re-normalize gene family and pathway <strong>abundances</strong> (so that all samples are in units of copies per million).</p>
<pre><code>humann2_renorm_table --input humann2_final_out/humann2_pathabundance.tsv --units cpm --output humann2_final_out/humann2_pathabundance_cpm.tsv
humann2_renorm_table --input humann2_final_out/humann2_genefamilies.tsv --units cpm --output humann2_final_out/humann2_genefamilies_cpm.tsv
</code></pre>
<h3><a id="user-content-34-separate-out-taxonomic-contributions" class="anchor" href="https://github.com/LangilleLab/microbiome_helper/wiki/Metagenomics-standard-operating-procedure-v2#34-separate-out-taxonomic-contributions"></a>3.4 Separate out taxonomic contributions</h3>
<p>Split the HUMAnN2 output abundance tables into stratified and unstratified tables (stratified tables include the taxa associated with a functional profile).</p>
<pre><code>humann2_split_stratified_table --input humann2_final_out/humann2_pathabundance_cpm.tsv --output humann2_final_out
humann2_split_stratified_table --input humann2_final_out/humann2_genefamilies_cpm.tsv --output humann2_final_out
humann2_split_stratified_table --input humann2_final_out/humann2_pathcoverage.tsv --output humann2_final_out
</code></pre>
<h3><a id="user-content-35-format-stamp-function-file" class="anchor" href="https://github.com/LangilleLab/microbiome_helper/wiki/Metagenomics-standard-operating-procedure-v2#35-format-stamp-function-file"></a>3.5 Format STAMP function file</h3>
<p>Convert the unstratified HUMAnN2 abundance tables to STAMP format by changing the header line. These commands remove the comment character and the spaces in the name of the first column. The trailing description of the abundance data type is also removed from each sample&#8217;s column name.</p>
<pre><code>sed 's/_Abundance-RPKs//g' humann2_final_out/humann2_genefamilies_cpm_unstratified.tsv | sed 's/# Gene Family/GeneFamily/' &gt; humann2_final_out/humann2_genefamilies_cpm_unstratified.spf
sed 's/_Abundance//g' humann2_final_out/humann2_pathabundance_cpm_unstratified.tsv | sed 's/# Pathway/Pathway/' &gt; humann2_final_out/humann2_pathabundance_cpm_unstratified.spf
sed 's/_Coverage//g' humann2_final_out/humann2_pathcoverage_unstratified.tsv | sed 's/# Pathway/Pathway/' &gt; humann2_final_out/humann2_pathcoverage_unstratified.spf
</code></pre>
<h3><a id="user-content-36-extract-metaphlan2-taxonomic-compositions" class="anchor" href="https://github.com/LangilleLab/microbiome_helper/wiki/Metagenomics-standard-operating-procedure-v2#36-extract-metaphlan2-taxonomic-compositions"></a>3.6 Extract MetaPhlAn2 taxonomic compositions</h3>
<p>Since HUMAnN2 also runs MetaPhlAn2 as an initial step, we can use the output tables already created to get the taxonomic composition of our samples. First we need to gather all the output MetaPhlAn2 results per sample into a single directory and then merge them into a single table using MetaPhlAn2&#8217;s <code>merge_metaphlan_tables.py</code> command. After this file is created we can fix the header so that each column corresponds to a sample name without the trailing &#8220;_metaphlan_bugs_list&#8221; description. Note that MetaPhlAn2 works best for well-characterized environments, like the human gut, and has low sensitivity in other environments.</p>
<pre><code>mkdir metaphlan2_out
cp humann2_out/*/*/*metaphlan_bugs_list.tsv metaphlan2_out/
/usr/local/metaphlan2/utils/merge_metaphlan_tables.py metaphlan2_out/*metaphlan_bugs_list.tsv &gt; metaphlan2_merged.txt
sed -i 's/_metaphlan_bugs_list//g' metaphlan2_merged.txt
</code></pre>
<h3><a id="user-content-37-format-stamp-taxonomy-file" class="anchor" href="https://github.com/LangilleLab/microbiome_helper/wiki/Metagenomics-standard-operating-procedure-v2#37-format-stamp-taxonomy-file"></a>3.7 Format STAMP taxonomy file</h3>
<p>Lastly, we can convert this MetaPhlAn2 abundance table to STAMP format.</p>
<pre><code>metaphlan_to_stamp.pl metaphlan2_merged.txt &gt; metaphlan2_merged.spf</code></pre>
]]></content:encoded>
			<wfw:commentRss>http://www.biofacebook.com/?feed=rss2&#038;p=1527</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Learning Enterotype Analysis</title>
		<link>http://www.biofacebook.com/?p=1521</link>
		<comments>http://www.biofacebook.com/?p=1521#comments</comments>
		<pubDate>Wed, 01 Jul 2020 05:24:02 +0000</pubDate>
		<dc:creator><![CDATA[szypanther]]></dc:creator>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Bioinformatics]]></category>

		<guid isPermaLink="false">http://www.biofacebook.com/?p=1521</guid>
		<description><![CDATA[<p>The enterotype (Enterotype) concept was proposed in a 2011 paper, and in 2018 more than twenty leading gut-microbiome researchers revisited and reaffirmed it. I had long been curious how to analyze enterotypes in code; today I found this tutorial, so I am putting it here:</p> <p>This is the original paper: Arumugam, M., Raes, J., et al. (2011) Enterotypes of the human gut microbiome, Nature, doi:10.1038/nature09944. A Google search turned up a tutorial on enterotype analysis made by the authors themselves, worth studying: http://enterotyping.embl.de/enterotypes.html. Here is the 2018 consensus paper by the field&#8217;s leaders, and a Chinese translation of it: http://blog.sciencenet.cn/blog-3334560-1096828.html. Of course, if you only need results for your own samples or project and do not want to run any code, there is a newer web-based classifier that is even easier to use, also mentioned in that translated article: http://enterotypes.org/. Just upload a file of genus relative abundances and you will get results quickly.</p> <p>Below I will attempt the analysis while learning, and keep the code here as a memo. The authors have in fact already organized the code; I will study it and aim to apply it to the data I have on hand.</p> First, download the test data: wget http://enterotyping.embl.de/MetaHIT_SangerSamples.genus.txt wget http://enterotyping.embl.de/enterotypes_tutorial.sanger.R Run the example data and debug <p>I admit I only half understand R, so for now I am content to run it first and then apply it to my own data as a tool. I am on a Hackintosh running OS X 10.11; running this script complained that XQuartz was missing, so I installed it (Windows and Linux should not need it). The original code also reported &#8220;could not find function &#8216;s.class&#8217;&#8221;; a search turned up a blog post naming the package that provides it, and adding library(ade4) fixed it. XQuartz download link for Mac 10.6+: https://dl.bintray.com/xquartz/downloads/XQuartz-2.7.11.dmg</p> #Uncomment the next two lines if the R packages are not already installed #install.packages("cluster") #install.packages("clusterSim") library(cluster) library(clusterSim) #BiocManager::install("genefilter") library(ade4) #Download the example data and set the working directory [...]]]></description>
				<content:encoded><![CDATA[<p>The enterotype (Enterotype) concept was proposed in a 2011 paper, and in 2018 more than twenty leading gut-microbiome researchers revisited and reaffirmed it. I had long been curious how to analyze enterotypes in code; today I found this tutorial, so I am putting it here:</p>
<blockquote><p><em>This is the original paper: Arumugam, M., Raes, J., et al. (2011) Enterotypes of the human gut microbiome, Nature, doi:10.1038/nature09944. A Google search turned up a tutorial on enterotype analysis made by the authors themselves, worth studying: http://enterotyping.embl.de/enterotypes.html. Here is the 2018 consensus paper by the field&#8217;s leaders, and a Chinese translation of it: http://blog.sciencenet.cn/blog-3334560-1096828.html. Of course, if you only need results for your own samples or project and do not want to run any code, there is a newer web-based classifier that is even easier to use, also mentioned in that translated article: http://enterotypes.org/. Just upload a file of genus relative abundances and you will get results quickly.</em></p></blockquote>
<p>Below I will attempt the analysis while learning, and keep the code here as a memo. The authors have in fact already organized the code; I will study it and aim to apply it to the data I have on hand.</p>
<h2 id="%E9%A6%96%E5%85%88%E4%B8%8B%E8%BD%BD%E6%B5%8B%E8%AF%95%E6%95%B0%E6%8D%AE%EF%BC%8C"><strong>First, download the test data</strong></h2>
<pre><code>wget http://enterotyping.embl.de/MetaHIT_SangerSamples.genus.txt
wget http://enterotyping.embl.de/enterotypes_tutorial.sanger.R
</code></pre>
<h2 id="%E8%B7%91%E8%B7%91%E7%A4%BA%E4%BE%8B%E6%95%B0%E6%8D%AE%EF%BC%8C%E6%8E%92%E6%8E%92%E9%94%99"><strong>Run the example data and debug</strong></h2>
<p>I admit I only half understand R, so for now I am content to run it first and then apply it to my own data as a tool. I am on a Hackintosh running OS X 10.11; running this script complained that XQuartz was missing, so I installed it (Windows and Linux should not need it). The original code also reported &#8220;could not find function &#8216;s.class&#8217;&#8221;; a search turned up a blog post naming the package that provides it, and adding <code>library(ade4)</code> fixed it. XQuartz download link for Mac 10.6+: https://dl.bintray.com/xquartz/downloads/XQuartz-2.7.11.dmg</p>
<pre>#Uncomment the next two lines if the R packages are not already installed
#install.packages("cluster")
#install.packages("clusterSim")
library(cluster)
library(clusterSim)
#BiocManager::install("genefilter")
library(ade4)

#Download the example data and set the working directory
#setwd('&lt;path_to_working_directory&gt;')
data=read.table("../MetaHIT_SangerSamples.genus.txt", header=T, row.names=1, dec=".", sep="\t")
data=data[-1,]

#Jensen-Shannon distance between all pairs of samples (columns)
dist.JSD &lt;- function(inMatrix, pseudocount=0.000001, ...) {
  KLD &lt;- function(x,y) sum(x * log(x/y))
  JSD &lt;- function(x,y) sqrt(0.5 * KLD(x, (x+y)/2) + 0.5 * KLD(y, (x+y)/2))
  matrixColSize &lt;- length(colnames(inMatrix))
  matrixRowSize &lt;- length(rownames(inMatrix))
  colnames &lt;- colnames(inMatrix)
  resultsMatrix &lt;- matrix(0, matrixColSize, matrixColSize)
  inMatrix = apply(inMatrix, 1:2, function(x) ifelse(x==0, pseudocount, x))
  for(i in 1:matrixColSize) {
    for(j in 1:matrixColSize) {
      resultsMatrix[i,j] = JSD(as.vector(inMatrix[,i]),
                               as.vector(inMatrix[,j]))
    }
  }
  colnames -&gt; colnames(resultsMatrix) -&gt; rownames(resultsMatrix)
  as.dist(resultsMatrix) -&gt; resultsMatrix
  attr(resultsMatrix, "method") &lt;- "dist"
  return(resultsMatrix)
}

data.dist = dist.JSD(data)

#Partitioning around medoids: x is a distance matrix and k the number of clusters
pam.clustering = function(x, k) {
  require(cluster)
  cluster = as.vector(pam(as.dist(x), k, diss=TRUE)$clustering)
  return(cluster)
}

data.cluster = pam.clustering(data.dist, k=3)

require(clusterSim)
nclusters = index.G1(t(data), data.cluster, d = data.dist, centrotypes = "medoids")

#Evaluate the Calinski-Harabasz (CH) index for k = 1..20 clusters
nclusters = NULL
for (k in 1:20) {
  if (k==1) {
    nclusters[k] = NA
  } else {
    data.cluster_temp = pam.clustering(data.dist, k)
    nclusters[k] = index.G1(t(data), data.cluster_temp, d = data.dist,
                            centrotypes = "medoids")
  }
}

plot(nclusters, type="h", xlab="k clusters", ylab="CH index", main="Optimal number of clusters")

obs.silhouette = mean(silhouette(data.cluster, data.dist)[,3])
<span class="token function">cat</span><span class="token punctuation">(</span>obs<span class="token punctuation">.</span>silhouette<span class="token punctuation">)</span> #<span class="token number">0.1899451</span>#data<span class="token operator">=</span>noise<span class="token punctuation">.</span><span class="token function">removal</span><span class="token punctuation">(</span>data<span class="token punctuation">,</span> percent<span class="token operator">=</span><span class="token number">0.01</span><span class="token punctuation">)</span>## plot <span class="token number">1</span>
obs<span class="token punctuation">.</span>pca<span class="token operator">=</span>dudi<span class="token punctuation">.</span><span class="token function">pca</span><span class="token punctuation">(</span>data<span class="token punctuation">.</span><span class="token function">frame</span><span class="token punctuation">(</span><span class="token function">t</span><span class="token punctuation">(</span>data<span class="token punctuation">)</span><span class="token punctuation">)</span><span class="token punctuation">,</span> scannf<span class="token operator">=</span>F<span class="token punctuation">,</span> nf<span class="token operator">=</span><span class="token number">10</span><span class="token punctuation">)</span>
obs<span class="token punctuation">.</span>bet<span class="token operator">=</span><span class="token function">bca</span><span class="token punctuation">(</span>obs<span class="token punctuation">.</span>pca<span class="token punctuation">,</span> fac<span class="token operator">=</span><span class="token keyword">as</span><span class="token punctuation">.</span><span class="token function">factor</span><span class="token punctuation">(</span>data<span class="token punctuation">.</span>cluster<span class="token punctuation">)</span><span class="token punctuation">,</span> scannf<span class="token operator">=</span>F<span class="token punctuation">,</span> nf<span class="token operator">=</span>k<span class="token number">-1</span><span class="token punctuation">)</span>
dev<span class="token punctuation">.</span><span class="token keyword">new</span><span class="token punctuation">(</span><span class="token punctuation">)</span>
s<span class="token punctuation">.</span><span class="token keyword">class</span><span class="token punctuation">(</span>obs<span class="token punctuation">.</span>bet$ls<span class="token punctuation">,</span> fac<span class="token operator">=</span><span class="token keyword">as</span><span class="token punctuation">.</span><span class="token function">factor</span><span class="token punctuation">(</span>data<span class="token punctuation">.</span>cluster<span class="token punctuation">)</span><span class="token punctuation">,</span> grid<span class="token operator">=</span>F<span class="token punctuation">,</span>sub<span class="token operator">=</span><span class="token string">"Between-class analysis"</span><span class="token punctuation">,</span> col<span class="token operator">=</span><span class="token function">c</span><span class="token punctuation">(</span><span class="token number">3</span><span class="token punctuation">,</span><span class="token number">2</span><span class="token punctuation">,</span><span class="token number">4</span><span class="token punctuation">)</span><span class="token punctuation">)</span>#plot <span class="token number">2</span>
obs<span class="token punctuation">.</span>pcoa<span class="token operator">=</span>dudi<span class="token punctuation">.</span><span class="token function">pco</span><span class="token punctuation">(</span>data<span class="token punctuation">.</span>dist<span class="token punctuation">,</span> scannf<span class="token operator">=</span>F<span class="token punctuation">,</span> nf<span class="token operator">=</span><span class="token number">3</span><span class="token punctuation">)</span>
dev<span class="token punctuation">.</span><span class="token keyword">new</span><span class="token punctuation">(</span><span class="token punctuation">)</span>
s<span class="token punctuation">.</span><span class="token keyword">class</span><span class="token punctuation">(</span>obs<span class="token punctuation">.</span>pcoa$li<span class="token punctuation">,</span> fac<span class="token operator">=</span><span class="token keyword">as</span><span class="token punctuation">.</span><span class="token function">factor</span><span class="token punctuation">(</span>data<span class="token punctuation">.</span>cluster<span class="token punctuation">)</span><span class="token punctuation">,</span> grid<span class="token operator">=</span>F<span class="token punctuation">,</span>sub<span class="token operator">=</span><span class="token string">"Principal coordiante analysis"</span><span class="token punctuation">,</span> col<span class="token operator">=</span><span class="token function">c</span><span class="token punctuation">(</span><span class="token number">3</span><span class="token punctuation">,</span><span class="token number">2</span><span class="token punctuation">,</span><span class="token number">4</span><span class="token punctuation">)</span><span class="token punctuation">)</span></pre>
<h2 id="%E4%B8%8A%E5%9B%BE%EF%BC%8C%E7%A8%8D%E5%BE%AE%E8%B0%83%E6%95%B4%E4%B8%8B"><strong>A small tweak to the plot above</strong></h2>
<p>The <code>, col=c(3,2,4)</code> argument assigns a different color to each of the three clusters. I haven&#8217;t yet worked out how to color the enclosing ellipses for a nicer effect; for anyone familiar with R this should be trivial. <code>, cell=0, cstar=0</code> hides the ellipses and connecting lines, leaving only the scatter points. Without these two parameters, the code above produces the following plot:</p>
<figure>
<div class="image-block"><img class="" src="https://ask.qcloudimg.com/http-save/yehe-1075469/j1jqq1y6uv.png?imageView2/2/w/1620" alt="" /></div>
</figure>
<p>With those two parameters added, the figure matches the last two plots in the tutorial:</p>
<figure>
<div class="image-block"><img class="" src="https://ask.qcloudimg.com/http-save/yehe-1075469/g6fnw7cmwb.png?imageView2/2/w/1620" alt="" /></div>
</figure>
]]></content:encoded>
			<wfw:commentRss>http://www.biofacebook.com/?feed=rss2&#038;p=1521</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Making a pairwise distance matrix in pandas</title>
		<link>http://www.biofacebook.com/?p=1517</link>
		<comments>http://www.biofacebook.com/?p=1517#comments</comments>
		<pubDate>Tue, 09 Jun 2020 03:31:54 +0000</pubDate>
		<dc:creator><![CDATA[szypanther]]></dc:creator>
				<category><![CDATA[统计学习]]></category>

		<guid isPermaLink="false">http://www.biofacebook.com/?p=1517</guid>
		<description><![CDATA[<p>This is a somewhat specialized problem that forms part of a lot of data science and clustering workflows. It starts with a relatively straightforward question: if we have a bunch of measurements for two different things, how do we come up with a single number that represents the difference between the two things?</p> <p>An example [...]]]></description>
				<content:encoded><![CDATA[<p>This is a somewhat specialized problem that forms part of a lot of data science and clustering workflows. It starts with a relatively straightforward question: if we have a bunch of measurements for two different things, how do we come up with a single number that represents the difference between the two things?</p>
<p>An example will make the question clearer. Let&#8217;s load our olympic medal dataset:</p>
<pre><code class="lang-python hljs"><span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd
pd.options.display.max_rows = <span class="hljs-number">10</span>
pd.options.display.max_columns = <span class="hljs-number">6</span>
data = pd.read_csv(<span class="hljs-string">"https://raw.githubusercontent.com/mojones/binders/master/olympics.csv"</span>, sep=<span class="hljs-string">"\t"</span>)
data</code></pre>
<div data-preserve-html-node="true">
<table class="dataframe" border="1" data-preserve-html-node="true">
<thead data-preserve-html-node="true">
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true"></th>
<th data-preserve-html-node="true">City</th>
<th data-preserve-html-node="true">Year</th>
<th data-preserve-html-node="true">Sport</th>
<th data-preserve-html-node="true">&#8230;</th>
<th data-preserve-html-node="true">Medal</th>
<th data-preserve-html-node="true">Country</th>
<th data-preserve-html-node="true">Int Olympic Committee code</th>
</tr>
</thead>
<tbody data-preserve-html-node="true">
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">0</th>
<td data-preserve-html-node="true">Athens</td>
<td data-preserve-html-node="true">1896</td>
<td data-preserve-html-node="true">Aquatics</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">Gold</td>
<td data-preserve-html-node="true">Hungary</td>
<td data-preserve-html-node="true">HUN</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">1</th>
<td data-preserve-html-node="true">Athens</td>
<td data-preserve-html-node="true">1896</td>
<td data-preserve-html-node="true">Aquatics</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">Silver</td>
<td data-preserve-html-node="true">Austria</td>
<td data-preserve-html-node="true">AUT</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">2</th>
<td data-preserve-html-node="true">Athens</td>
<td data-preserve-html-node="true">1896</td>
<td data-preserve-html-node="true">Aquatics</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">Bronze</td>
<td data-preserve-html-node="true">Greece</td>
<td data-preserve-html-node="true">GRE</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">3</th>
<td data-preserve-html-node="true">Athens</td>
<td data-preserve-html-node="true">1896</td>
<td data-preserve-html-node="true">Aquatics</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">Gold</td>
<td data-preserve-html-node="true">Greece</td>
<td data-preserve-html-node="true">GRE</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">4</th>
<td data-preserve-html-node="true">Athens</td>
<td data-preserve-html-node="true">1896</td>
<td data-preserve-html-node="true">Aquatics</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">Silver</td>
<td data-preserve-html-node="true">Greece</td>
<td data-preserve-html-node="true">GRE</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">&#8230;</th>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">&#8230;</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">29211</th>
<td data-preserve-html-node="true">Beijing</td>
<td data-preserve-html-node="true">2008</td>
<td data-preserve-html-node="true">Wrestling</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">Silver</td>
<td data-preserve-html-node="true">Germany</td>
<td data-preserve-html-node="true">GER</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">29212</th>
<td data-preserve-html-node="true">Beijing</td>
<td data-preserve-html-node="true">2008</td>
<td data-preserve-html-node="true">Wrestling</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">Bronze</td>
<td data-preserve-html-node="true">Lithuania</td>
<td data-preserve-html-node="true">LTU</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">29213</th>
<td data-preserve-html-node="true">Beijing</td>
<td data-preserve-html-node="true">2008</td>
<td data-preserve-html-node="true">Wrestling</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">Bronze</td>
<td data-preserve-html-node="true">Armenia</td>
<td data-preserve-html-node="true">ARM</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">29214</th>
<td data-preserve-html-node="true">Beijing</td>
<td data-preserve-html-node="true">2008</td>
<td data-preserve-html-node="true">Wrestling</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">Gold</td>
<td data-preserve-html-node="true">Cuba</td>
<td data-preserve-html-node="true">CUB</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">29215</th>
<td data-preserve-html-node="true">Beijing</td>
<td data-preserve-html-node="true">2008</td>
<td data-preserve-html-node="true">Wrestling</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">Silver</td>
<td data-preserve-html-node="true">Russia</td>
<td data-preserve-html-node="true">RUS</td>
</tr>
</tbody>
</table>
<p data-preserve-html-node="true">29216 rows × 12 columns</p>
</div>
<p>and measure, for each different country, the number of medals they&#8217;ve won in each different sport:</p>
<pre><code class="lang-python hljs">summary = data.groupby([<span class="hljs-string">'Country'</span>, <span class="hljs-string">'Sport'</span>]).size().unstack().fillna(<span class="hljs-number">0</span>)
summary</code></pre>
<div data-preserve-html-node="true">
<table class="dataframe" border="1" data-preserve-html-node="true">
<thead data-preserve-html-node="true">
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Sport</th>
<th data-preserve-html-node="true">Aquatics</th>
<th data-preserve-html-node="true">Archery</th>
<th data-preserve-html-node="true">Athletics</th>
<th data-preserve-html-node="true">&#8230;</th>
<th data-preserve-html-node="true">Water Motorsports</th>
<th data-preserve-html-node="true">Weightlifting</th>
<th data-preserve-html-node="true">Wrestling</th>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Country</th>
<th data-preserve-html-node="true"></th>
<th data-preserve-html-node="true"></th>
<th data-preserve-html-node="true"></th>
<th data-preserve-html-node="true"></th>
<th data-preserve-html-node="true"></th>
<th data-preserve-html-node="true"></th>
<th data-preserve-html-node="true"></th>
</tr>
</thead>
<tbody data-preserve-html-node="true">
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Afghanistan</th>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">0.0</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Algeria</th>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">6.0</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">0.0</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Argentina</th>
<td data-preserve-html-node="true">3.0</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">5.0</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">2.0</td>
<td data-preserve-html-node="true">0.0</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Armenia</th>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">4.0</td>
<td data-preserve-html-node="true">4.0</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Australasia</th>
<td data-preserve-html-node="true">11.0</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">1.0</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">0.0</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">&#8230;</th>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">&#8230;</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Virgin Islands*</th>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">0.0</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">West Germany</th>
<td data-preserve-html-node="true">62.0</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">67.0</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">7.0</td>
<td data-preserve-html-node="true">9.0</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Yugoslavia</th>
<td data-preserve-html-node="true">91.0</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">2.0</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">16.0</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Zambia</th>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">1.0</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">0.0</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Zimbabwe</th>
<td data-preserve-html-node="true">7.0</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">0.0</td>
</tr>
</tbody>
</table>
<p data-preserve-html-node="true">137 rows × 42 columns</p>
</div>
<p>Now we&#8217;ll pick two countries:</p>
<pre><code class="lang-python hljs">summary.loc[[<span class="hljs-string">'Germany'</span>, <span class="hljs-string">'Italy'</span>]]</code></pre>
<div data-preserve-html-node="true">
<table class="dataframe" border="1" data-preserve-html-node="true">
<thead data-preserve-html-node="true">
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Sport</th>
<th data-preserve-html-node="true">Aquatics</th>
<th data-preserve-html-node="true">Archery</th>
<th data-preserve-html-node="true">Athletics</th>
<th data-preserve-html-node="true">&#8230;</th>
<th data-preserve-html-node="true">Water Motorsports</th>
<th data-preserve-html-node="true">Weightlifting</th>
<th data-preserve-html-node="true">Wrestling</th>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Country</th>
<th data-preserve-html-node="true"></th>
<th data-preserve-html-node="true"></th>
<th data-preserve-html-node="true"></th>
<th data-preserve-html-node="true"></th>
<th data-preserve-html-node="true"></th>
<th data-preserve-html-node="true"></th>
<th data-preserve-html-node="true"></th>
</tr>
</thead>
<tbody data-preserve-html-node="true">
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Germany</th>
<td data-preserve-html-node="true">175.0</td>
<td data-preserve-html-node="true">6.0</td>
<td data-preserve-html-node="true">99.0</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">20.0</td>
<td data-preserve-html-node="true">24.0</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Italy</th>
<td data-preserve-html-node="true">113.0</td>
<td data-preserve-html-node="true">12.0</td>
<td data-preserve-html-node="true">71.0</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">0.0</td>
<td data-preserve-html-node="true">14.0</td>
<td data-preserve-html-node="true">20.0</td>
</tr>
</tbody>
</table>
<p data-preserve-html-node="true">2 rows × 42 columns</p>
</div>
<p>Each country has 42 columns giving the total number of medals won in each sport. Our job is to come up with a single number that summarizes how different those two lists of numbers are. Mathematicians have figured out lots of different ways of doing that, many of which are implemented in the <code>scipy.spatial.distance</code> module.</p>
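<p>As a quick illustration (using made-up numbers rather than the medal data), <code>pdist</code> takes a <code>metric</code> argument to switch between these measures:</p>

```python
import numpy as np
from scipy.spatial.distance import pdist

# two made-up rows of per-sport medal counts (not the real dataset)
rows = np.array([[175.0, 6.0, 99.0],
                 [113.0, 12.0, 71.0]])

eu = pdist(rows, metric="euclidean")[0]  # straight-line distance
cb = pdist(rows, metric="cityblock")[0]  # sum of absolute differences

# cityblock: |175-113| + |6-12| + |99-71| = 96
assert cb == 96.0
assert eu <= cb  # Euclidean distance never exceeds cityblock
```

<p>Every metric in the module can be swapped in the same way, without changing the rest of the workflow.</p>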
<p>If we just import <code>pdist</code> from the module and pass in our dataframe of two countries, we&#8217;ll get a measurement:</p>
<pre><code class="lang-python hljs"><span class="hljs-keyword">from</span> scipy.spatial.distance <span class="hljs-keyword">import</span> pdist
pdist(summary.loc[[<span class="hljs-string">'Germany'</span>, <span class="hljs-string">'Italy'</span>]])</code></pre>
<pre><code>array([342.3024978])</code></pre>
<p>That&#8217;s the distance score using the default metric, which is called the <em>Euclidean distance</em>. Think of it as the straight-line distance between the two points in space defined by the two lists of 42 numbers.</p>
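<p>To make that concrete, here&#8217;s a sketch (on a tiny stand-in frame with invented numbers, so it runs without downloading the medal data) showing that the default metric is just the square root of the summed squared differences:</p>

```python
import numpy as np
import pandas as pd
from scipy.spatial.distance import pdist

# a stand-in for summary.loc[['Germany', 'Italy']] with invented numbers
toy = pd.DataFrame([[175.0, 6.0, 99.0],
                    [113.0, 12.0, 71.0]],
                   index=["Germany", "Italy"])

d_scipy = pdist(toy)[0]

# Euclidean distance by hand: difference, square, sum, square root
d_manual = np.sqrt(((toy.loc["Germany"] - toy.loc["Italy"]) ** 2).sum())

assert np.isclose(d_scipy, d_manual)
```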
<p>Now, what happens if we pass in a dataframe with three countries?</p>
<pre><code class="lang-python hljs">pdist(summary.loc[[<span class="hljs-string">'Germany'</span>, <span class="hljs-string">'Italy'</span>, <span class="hljs-string">'France'</span>]])</code></pre>
<pre><code>array([342.3024978 , 317.98584874, 144.82403116])</code></pre>
<p>As we might expect, we have three measurements:</p>
<ul>
<li>Germany and Italy</li>
<li>Germany and France</li>
<li>Italy and France</li>
</ul>
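<p>(For the record, the condensed vector does follow a fixed convention: pairs are listed row by row across the upper triangle — (0,1), (0,2), &#8230;, then (1,2), and so on — which we can verify on three made-up one-dimensional points:)</p>

```python
import numpy as np
from scipy.spatial.distance import pdist

pts = np.array([[0.0], [1.0], [3.0]])  # three points on a line

d = pdist(pts)
# condensed order: (0,1), (0,2), (1,2)
assert list(d) == [1.0, 3.0, 2.0]
```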
<p>But it&#8217;s not easy to figure out which belongs to which. Happily, <code>scipy</code> also has a helper function that will take this list of numbers and turn it back into a square matrix:</p>
<pre><code class="lang-python hljs"><span class="hljs-keyword">from</span> scipy.spatial.distance <span class="hljs-keyword">import</span> squareform

squareform(pdist(summary.loc[[<span class="hljs-string">'Germany'</span>, <span class="hljs-string">'Italy'</span>, <span class="hljs-string">'France'</span>]]))</code></pre>
<pre><code>array([[  0.        , 342.3024978 , 317.98584874],
       [342.3024978 ,   0.        , 144.82403116],
       [317.98584874, 144.82403116,   0.        ]])</code></pre>
<p>In order to make sense of this, we need to re-attach the country names, which we can just do by turning it into a DataFrame:</p>
<pre><code class="lang-python hljs">pd.DataFrame(
    squareform(pdist(summary.loc[[<span class="hljs-string">'Germany'</span>, <span class="hljs-string">'Italy'</span>, <span class="hljs-string">'France'</span>]])),
    columns = [<span class="hljs-string">'Germany'</span>, <span class="hljs-string">'Italy'</span>, <span class="hljs-string">'France'</span>],
    index = [<span class="hljs-string">'Germany'</span>, <span class="hljs-string">'Italy'</span>, <span class="hljs-string">'France'</span>]
)</code></pre>
<div data-preserve-html-node="true">
<table class="dataframe" border="1" data-preserve-html-node="true">
<thead data-preserve-html-node="true">
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true"></th>
<th data-preserve-html-node="true">Germany</th>
<th data-preserve-html-node="true">Italy</th>
<th data-preserve-html-node="true">France</th>
</tr>
</thead>
<tbody data-preserve-html-node="true">
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Germany</th>
<td data-preserve-html-node="true">0.000000</td>
<td data-preserve-html-node="true">342.302498</td>
<td data-preserve-html-node="true">317.985849</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Italy</th>
<td data-preserve-html-node="true">342.302498</td>
<td data-preserve-html-node="true">0.000000</td>
<td data-preserve-html-node="true">144.824031</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">France</th>
<td data-preserve-html-node="true">317.985849</td>
<td data-preserve-html-node="true">144.824031</td>
<td data-preserve-html-node="true">0.000000</td>
</tr>
</tbody>
</table>
</div>
<p>Hopefully this agrees with our intuition; the numbers on the diagonal are all zero, because each country is identical to itself, and the numbers above and below are mirror images, because the distance between Germany and France is the same as the distance between France and Germany (remember that we are talking about distance in terms of their medal totals, not geographical distance!)</p>
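<p>Both properties — zero diagonal and symmetry — are easy to check mechanically; here&#8217;s a sketch on random stand-in data (any point set will do):</p>

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(42)
points = rng.random((5, 4))  # five random "countries", four "sports"

m = squareform(pdist(points))

assert np.allclose(np.diag(m), 0.0)  # each point is distance 0 from itself
assert np.allclose(m, m.T)           # d(i, j) == d(j, i)
```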
<p>Finally, to get pairwise measurements for the whole input dataframe, we just pass in the complete object and get the country names from the index:</p>
<pre><code class="lang-python hljs">pairwise = pd.DataFrame(
    squareform(pdist(summary)),
    columns = summary.index,
    index = summary.index
)

pairwise</code></pre>
<div data-preserve-html-node="true">
<table class="dataframe" border="1" data-preserve-html-node="true">
<thead data-preserve-html-node="true">
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Country</th>
<th data-preserve-html-node="true">Afghanistan</th>
<th data-preserve-html-node="true">Algeria</th>
<th data-preserve-html-node="true">Argentina</th>
<th data-preserve-html-node="true">&#8230;</th>
<th data-preserve-html-node="true">Yugoslavia</th>
<th data-preserve-html-node="true">Zambia</th>
<th data-preserve-html-node="true">Zimbabwe</th>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Country</th>
<th data-preserve-html-node="true"></th>
<th data-preserve-html-node="true"></th>
<th data-preserve-html-node="true"></th>
<th data-preserve-html-node="true"></th>
<th data-preserve-html-node="true"></th>
<th data-preserve-html-node="true"></th>
<th data-preserve-html-node="true"></th>
</tr>
</thead>
<tbody data-preserve-html-node="true">
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Afghanistan</th>
<td data-preserve-html-node="true">0.000000</td>
<td data-preserve-html-node="true">8.774964</td>
<td data-preserve-html-node="true">96.643675</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">171.947666</td>
<td data-preserve-html-node="true">1.732051</td>
<td data-preserve-html-node="true">17.492856</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Algeria</th>
<td data-preserve-html-node="true">8.774964</td>
<td data-preserve-html-node="true">0.000000</td>
<td data-preserve-html-node="true">95.199790</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">171.688672</td>
<td data-preserve-html-node="true">7.348469</td>
<td data-preserve-html-node="true">19.519221</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Argentina</th>
<td data-preserve-html-node="true">96.643675</td>
<td data-preserve-html-node="true">95.199790</td>
<td data-preserve-html-node="true">0.000000</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">148.128323</td>
<td data-preserve-html-node="true">96.348326</td>
<td data-preserve-html-node="true">89.810912</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Armenia</th>
<td data-preserve-html-node="true">5.830952</td>
<td data-preserve-html-node="true">9.848858</td>
<td data-preserve-html-node="true">96.477977</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">171.604196</td>
<td data-preserve-html-node="true">5.744563</td>
<td data-preserve-html-node="true">18.384776</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Australasia</th>
<td data-preserve-html-node="true">18.708287</td>
<td data-preserve-html-node="true">20.024984</td>
<td data-preserve-html-node="true">97.744565</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">166.991018</td>
<td data-preserve-html-node="true">18.627936</td>
<td data-preserve-html-node="true">22.360680</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">&#8230;</th>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">&#8230;</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Virgin Islands*</th>
<td data-preserve-html-node="true">1.414214</td>
<td data-preserve-html-node="true">8.774964</td>
<td data-preserve-html-node="true">96.457244</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">171.947666</td>
<td data-preserve-html-node="true">1.732051</td>
<td data-preserve-html-node="true">17.492856</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">West Germany</th>
<td data-preserve-html-node="true">153.052279</td>
<td data-preserve-html-node="true">150.306354</td>
<td data-preserve-html-node="true">142.537714</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">184.945938</td>
<td data-preserve-html-node="true">152.577849</td>
<td data-preserve-html-node="true">144.045132</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Yugoslavia</th>
<td data-preserve-html-node="true">171.947666</td>
<td data-preserve-html-node="true">171.688672</td>
<td data-preserve-html-node="true">148.128323</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">0.000000</td>
<td data-preserve-html-node="true">171.874955</td>
<td data-preserve-html-node="true">169.103519</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Zambia</th>
<td data-preserve-html-node="true">1.732051</td>
<td data-preserve-html-node="true">7.348469</td>
<td data-preserve-html-node="true">96.348326</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">171.874955</td>
<td data-preserve-html-node="true">0.000000</td>
<td data-preserve-html-node="true">17.521415</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">Zimbabwe</th>
<td data-preserve-html-node="true">17.492856</td>
<td data-preserve-html-node="true">19.519221</td>
<td data-preserve-html-node="true">89.810912</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">169.103519</td>
<td data-preserve-html-node="true">17.521415</td>
<td data-preserve-html-node="true">0.000000</td>
</tr>
</tbody>
</table>
<p data-preserve-html-node="true">137 rows × 137 columns</p>
</div>
<p>A nice way to visualize these is with a heatmap. 137 countries is a bit too many to show on a webpage, so let&#8217;s restrict it to just the countries that have won more than 500 medals in total:</p>
<pre><code class="lang-python hljs"><span class="hljs-keyword">import</span> seaborn <span class="hljs-keyword">as</span> sns
<span class="hljs-keyword">import</span> matplotlib.pyplot <span class="hljs-keyword">as</span> plt

<span class="hljs-comment"># make summary table for just top countries</span>
top_countries = (
    data
    .groupby(<span class="hljs-string">'Country'</span>)
    .filter(<span class="hljs-keyword">lambda</span> x : len(x) &gt; <span class="hljs-number">500</span>)
    .groupby([<span class="hljs-string">'Country'</span>, <span class="hljs-string">'Sport'</span>])
    .size()
    .unstack()
    .fillna(<span class="hljs-number">0</span>)
    )

<span class="hljs-comment"># make pairwise distance matrix</span>
pairwise_top = pd.DataFrame(
    squareform(pdist(top_countries)),
    columns = top_countries.index,
    index = top_countries.index
)

<span class="hljs-comment"># plot it with seaborn</span>
plt.figure(figsize=(<span class="hljs-number">10</span>,<span class="hljs-number">10</span>))
sns.heatmap(
    pairwise_top,
    cmap=<span class="hljs-string">'OrRd'</span>,
    linewidth=<span class="hljs-number">1</span>
)</code></pre>
<p><img src="https://github.com/mojones/datavis_resources/raw/master/pairwise%20distance/output_18_1.png" alt="png" /></p>
<p>Now that we have a plot to look at, we can see a problem with the distance metric we&#8217;re using. The US has won so many more medals than other countries that it distorts the measurement. And if we think about it, what we&#8217;re really interested in is not the exact number of medals in each category, but the relative number. In other words, we want two countries to be considered similar if, for example, they both have about twice as many medals in boxing as in athletics, regardless of the exact numbers.</p>
<p>Luckily for us, there is a distance measure already implemented in scipy that has that property &#8211; it&#8217;s called <em>cosine distance</em>. Think of it as a measurement that only looks at the relationships between the 44 numbers for each country, not their magnitude. We can switch to cosine distance by specifying the <code>metric</code> keyword argument in <code>pdist</code>:</p>
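<p>To see the scale invariance concretely, here is a tiny sketch (toy vectors, not the Olympic data): a vector and a scaled copy of it have a cosine distance of essentially zero.</p>

```python
import numpy as np
from scipy.spatial.distance import pdist

# a medal profile and the same profile at 10x the magnitude
a = np.array([1.0, 2.0, 3.0])
vectors = np.vstack([a, 10 * a])

# cosine distance compares direction only, so the pair is "identical"
print(pdist(vectors, metric='cosine'))  # close to [0.]
```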
<pre><code class="lang-python hljs"><span class="hljs-comment"># make pairwise distance matrix</span>
pairwise_top = pd.DataFrame(
    squareform(pdist(top_countries, metric=<span class="hljs-string">'cosine'</span>)),
    columns = top_countries.index,
    index = top_countries.index
)

<span class="hljs-comment"># plot it with seaborn</span>
plt.figure(figsize=(<span class="hljs-number">10</span>,<span class="hljs-number">10</span>))
sns.heatmap(
    pairwise_top,
    cmap=<span class="hljs-string">'OrRd'</span>,
    linewidth=<span class="hljs-number">1</span>
)</code></pre>
<p><img src="https://github.com/mojones/datavis_resources/raw/master/pairwise%20distance/output_20_1.png" alt="png" /></p>
<p>And as you can see, we spot some much more interesting patterns. Notice, for example, that Russia and the Soviet Union have a very low distance (i.e. their medal distributions are very similar).</p>
<p>When looking at data like this, remember that the shade of each cell is not telling us anything about how many medals a country has won &#8211; simply how different or similar each country is to each other. Compare the above heatmap with this one which displays the proportion of medals in each sport per country:</p>
<pre><code class="lang-python hljs">plt.figure(figsize=(<span class="hljs-number">10</span>,<span class="hljs-number">10</span>))
sns.heatmap(
    top_countries.apply(<span class="hljs-keyword">lambda</span> x : x / x.sum(), axis=<span class="hljs-number">1</span>),
    cmap=<span class="hljs-string">'BuPu'</span>,
    square=<span class="hljs-keyword">True</span>,
    cbar_kws = {<span class="hljs-string">'fraction'</span> : <span class="hljs-number">0.02</span>}
)</code></pre>
<p><img src="https://github.com/mojones/datavis_resources/raw/master/pairwise%20distance/output_22_1.png" alt="png" /></p>
<p>Finally, how might we find pairs of countries that have very similar medal distributions (i.e. very low numbers in the pairwise table)? By far the easiest way is to start off by reshaping the table into long form, so that each comparison is on a separate row:</p>
<pre><code class="lang-python hljs"><span class="hljs-comment"># create our pairwise distance matrix</span>
pairwise = pd.DataFrame(
    squareform(pdist(summary, metric=<span class="hljs-string">'cosine'</span>)),
    columns = summary.index,
    index = summary.index
)

<span class="hljs-comment"># move to long form</span>
long_form = pairwise.unstack()

<span class="hljs-comment"># rename columns and turn into a dataframe</span>
long_form.index.rename([<span class="hljs-string">'Country A'</span>, <span class="hljs-string">'Country B'</span>], inplace=<span class="hljs-keyword">True</span>)
long_form = long_form.to_frame(<span class="hljs-string">'cosine distance'</span>).reset_index()</code></pre>
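<p>If the reshaping above feels opaque, here is what it does on a toy 2x2 distance matrix (made-up numbers, same method):</p>

```python
import pandas as pd

# toy symmetric distance matrix for two "countries"
toy = pd.DataFrame(
    [[0.0, 0.3],
     [0.3, 0.0]],
    index=['A', 'B'], columns=['A', 'B']
)

# unstack turns the square matrix into one row per (A, B) comparison
long_form = toy.unstack()
long_form.index.rename(['Country A', 'Country B'], inplace=True)
long_form = long_form.to_frame('cosine distance').reset_index()
print(long_form)  # 4 rows: A-A, A-B, B-A, B-B
```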
<p>Now we can write our filter as normal, remembering to filter out the uninteresting rows that tell us a country&#8217;s distance from itself!</p>
<pre><code class="lang-python hljs">long_form[
    (long_form[<span class="hljs-string">'cosine distance'</span>] &lt; <span class="hljs-number">0.05</span>) 
    &amp; (long_form[<span class="hljs-string">'Country A'</span>] != long_form[<span class="hljs-string">'Country B'</span>])
]</code></pre>
<div data-preserve-html-node="true">
<table class="dataframe" border="1" data-preserve-html-node="true">
<thead data-preserve-html-node="true">
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true"></th>
<th data-preserve-html-node="true">Country A</th>
<th data-preserve-html-node="true">Country B</th>
<th data-preserve-html-node="true">cosine distance</th>
</tr>
</thead>
<tbody data-preserve-html-node="true">
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">272</th>
<td data-preserve-html-node="true">Algeria</td>
<td data-preserve-html-node="true">Zambia</td>
<td data-preserve-html-node="true">0.026671</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">1034</th>
<td data-preserve-html-node="true">Azerbaijan</td>
<td data-preserve-html-node="true">Mongolia</td>
<td data-preserve-html-node="true">0.045618</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">1105</th>
<td data-preserve-html-node="true">Bahamas</td>
<td data-preserve-html-node="true">Barbados</td>
<td data-preserve-html-node="true">0.021450</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">1111</th>
<td data-preserve-html-node="true">Bahamas</td>
<td data-preserve-html-node="true">British West Indies</td>
<td data-preserve-html-node="true">0.021450</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">1113</th>
<td data-preserve-html-node="true">Bahamas</td>
<td data-preserve-html-node="true">Burundi</td>
<td data-preserve-html-node="true">0.021450</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">&#8230;</th>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">&#8230;</td>
<td data-preserve-html-node="true">&#8230;</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">17033</th>
<td data-preserve-html-node="true">United Arab Emirates</td>
<td data-preserve-html-node="true">Haiti</td>
<td data-preserve-html-node="true">0.010051</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">17037</th>
<td data-preserve-html-node="true">United Arab Emirates</td>
<td data-preserve-html-node="true">Independent Olympic Participants</td>
<td data-preserve-html-node="true">0.000000</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">17051</th>
<td data-preserve-html-node="true">United Arab Emirates</td>
<td data-preserve-html-node="true">Kuwait</td>
<td data-preserve-html-node="true">0.000000</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">18164</th>
<td data-preserve-html-node="true">Virgin Islands</td>
<td data-preserve-html-node="true">Netherlands Antilles</td>
<td data-preserve-html-node="true">0.000000</td>
</tr>
<tr data-preserve-html-node="true">
<th data-preserve-html-node="true">18496</th>
<td data-preserve-html-node="true">Zambia</td>
<td data-preserve-html-node="true">Algeria</td>
<td data-preserve-html-node="true">0.026671</td>
</tr>
</tbody>
</table>
<p data-preserve-html-node="true">462 rows × 3 columns</p>
</div>
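<p>One quirk of the output above: because the matrix is symmetric, every pair appears twice (e.g. the Algeria/Zambia row and the Zambia/Algeria row). A simple way to keep each pair once, sketched here on toy rows, is to keep only rows where Country A sorts before Country B:</p>

```python
import pandas as pd

# toy long-form table with a mirrored pair and a self-comparison
long_form = pd.DataFrame({
    'Country A': ['Algeria', 'Zambia', 'Algeria'],
    'Country B': ['Zambia', 'Algeria', 'Algeria'],
    'cosine distance': [0.026671, 0.026671, 0.0],
})

# keeping only rows where Country A < Country B drops both the mirror
# duplicate and the self-comparison
deduped = long_form[long_form['Country A'] < long_form['Country B']]
print(deduped)
```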
<p>https://www.drawingfromdata.com/making-a-pairwise-distance-matrix-with-pandas  <code>dm = squareform(pdist(input_data, metric='braycurtis'))</code></p>
]]></content:encoded>
			<wfw:commentRss>http://www.biofacebook.com/?feed=rss2&#038;p=1517</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Pandas and Sklearn</title>
		<link>http://www.biofacebook.com/?p=1512</link>
		<comments>http://www.biofacebook.com/?p=1512#comments</comments>
		<pubDate>Fri, 05 Jun 2020 03:02:14 +0000</pubDate>
		<dc:creator><![CDATA[szypanther]]></dc:creator>
				<category><![CDATA[生物信息]]></category>
		<category><![CDATA[统计学习]]></category>

		<guid isPermaLink="false">http://www.biofacebook.com/?p=1512</guid>
		<description><![CDATA[pandas isnull函数检查数据是否有缺失 pandas isnull sum with column headers <p>&#160;</p> for col in main_df: print(sum(pd.isnull(data[col]))) <p>I get a list of the null count for each column:</p> 0 1 100 <p>What I&#8217;m trying to do is create a new dataframe which has the column header alongside the null count, e.g.</p> col1 &#124; 0 col2 &#124; 1 col3 [...]]]></description>
				<content:encoded><![CDATA[<h1 class="title-article">pandas isnull函数检查数据是否有缺失</h1>
<h1 class="grid--cell fs-headline1 fl1 ow-break-word mb8"><a class="question-hyperlink" href="https://stackoverflow.com/questions/41681693/pandas-isnull-sum-with-column-headers">pandas isnull sum with column headers</a></h1>
<p>&nbsp;</p>
<pre><code>for col in main_df:
    print(sum(pd.isnull(main_df[col])))
</code></pre>
<p>I get a list of the null count for each column:</p>
<pre><code>0
1
100
</code></pre>
<p>What I&#8217;m trying to do is create a new dataframe which has the column header alongside the null count, e.g.</p>
<pre><code>col1 | 0
col2 | 1
col3 | 100</code></pre>
<p>&nbsp;</p>
<p>&nbsp;</p>
<p>Print the null count for every column using:</p>
<p>&nbsp;</p>
<pre><code>nulls = df.isnull().sum().to_frame()
for index, row in nulls.iterrows():
    print(index, row[0])
</code></pre>
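<p>The to_frame step above already gives the desired layout; naming the column makes it explicit (a sketch on made-up data):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'col1': [1, 2, 3],
    'col2': [1, None, 3],
    'col3': [None, None, None],
})

# one row per column, with the column header as the index
null_counts = df.isnull().sum().to_frame('null_count')
print(null_counts)
```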
<p>&nbsp;</p>
<pre><code>for col in df:
    print(df[col].unique())
</code></pre>
<h1 class="article-title J-articleTitle">Usage of pandas.get_dummies (One-Hot Encoding)</h1>
<p>&nbsp;</p>
<p>get_dummies is the pandas way to do one-hot encoding. For full parameter details, see the <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html" target="_blank" rel="nofollow noopener noreferrer" data-from="10680">official documentation</a>.</p>
<blockquote><p>pandas.get_dummies(data, prefix=None, prefix_sep='_', dummy_na=False, columns=None, sparse=False, drop_first=False)[source]</p></blockquote>
<p>Parameters:</p>
<ul class="ul-level-0">
<li>data : array-like, Series, or DataFrame. The input data.</li>
<li>prefix : string, list of strings, or dict of strings, default None. Prefix for the column names created by get_dummies.</li>
<li>columns : list-like, default None. The column names to encode.</li>
<li>dummy_na : bool, default False. Add a column indicating missing values; if False, NaNs are ignored.</li>
<li>drop_first : bool, default False. Keep k-1 out of k category levels by dropping the first.</li>
</ul>
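<p>A quick illustration of dummy_na and drop_first on a toy Series (made-up data):</p>

```python
import pandas as pd

s = pd.Series(['red', 'blue', None])

print(pd.get_dummies(s))                   # blue/red columns; the NaN row is all zeros
print(pd.get_dummies(s, dummy_na=True))    # adds an extra column flagging NaN
print(pd.get_dummies(s, drop_first=True))  # drops the first category level
```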
<p><strong>There are two cases when encoding a discrete feature:</strong></p>
<p>1. The values have no ordinal meaning, e.g. color: [red, blue]; use one-hot encoding.</p>
<p>2. The values have an ordinal meaning, e.g. size: [X, XL, XXL]; use a numeric mapping such as {X:1, XL:2, XXL:3}.</p>
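<p>The ordinal case in point 2 maps cleanly onto Series.map (toy sketch):</p>

```python
import pandas as pd

df = pd.DataFrame({'size': ['X', 'XL', 'XXL', 'XL']})

# ordered categories get explicit numeric codes instead of one-hot columns
df['size_code'] = df['size'].map({'X': 1, 'XL': 2, 'XXL': 3})
print(df)
```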
<p>Example:</p>
<pre class="prism-token token  language-javascript"><span class="token keyword">import</span> pandas <span class="token keyword">as</span> pd

df <span class="token operator">=</span> pd<span class="token punctuation">.</span><span class="token function">DataFrame</span><span class="token punctuation">(</span><span class="token punctuation">[</span>  
            <span class="token punctuation">[</span><span class="token string">'green'</span> <span class="token punctuation">,</span> <span class="token string">'A'</span><span class="token punctuation">]</span><span class="token punctuation">,</span>   
            <span class="token punctuation">[</span><span class="token string">'red'</span>   <span class="token punctuation">,</span> <span class="token string">'B'</span><span class="token punctuation">]</span><span class="token punctuation">,</span>   
            <span class="token punctuation">[</span><span class="token string">'blue'</span>  <span class="token punctuation">,</span> <span class="token string">'A'</span><span class="token punctuation">]</span><span class="token punctuation">]</span><span class="token punctuation">)</span>  

df<span class="token punctuation">.</span>columns <span class="token operator">=</span> <span class="token punctuation">[</span><span class="token string">'color'</span><span class="token punctuation">,</span>  <span class="token string">'class'</span><span class="token punctuation">]</span> 
pd<span class="token punctuation">.</span><span class="token function">get_dummies</span><span class="token punctuation">(</span>df<span class="token punctuation">)</span></pre>
<p>Before get_dummies:</p>
<figure>
<div class="image-block"><img class="" src="https://ask.qcloudimg.com/http-save/yehe-4908043/zzw0nx7acu.png?imageView2/2/w/1620" alt="" /></div>
</figure>
<p>After get_dummies:</p>
<figure>
<div class="image-block"><img class="" src="https://ask.qcloudimg.com/http-save/yehe-4908043/91815t6ls4.png?imageView2/2/w/1620" alt="" /></div>
</figure>
<p>If you print df after running the above, you will still see the pre-get_dummies table, because the result was never assigned back:</p>
<pre class="prism-token token  language-javascript">df <span class="token operator">=</span> pd<span class="token punctuation">.</span><span class="token function">get_dummies</span><span class="token punctuation">(</span>df<span class="token punctuation">)</span></pre>
<p>You can also run get_dummies on just a specified column:</p>
<pre class="prism-token token  language-javascript">pd<span class="token punctuation">.</span><span class="token function">get_dummies</span><span class="token punctuation">(</span>df<span class="token punctuation">.</span>color<span class="token punctuation">)</span></pre>
<figure>
<div class="image-block"><img class="" src="https://ask.qcloudimg.com/http-save/yehe-4908043/qoplyil41o.png?imageView2/2/w/1620" alt="" /></div>
</figure>
<p>Run get_dummies on the specified column and join the result back into the original data:</p>
<pre class="prism-token token  language-javascript">df <span class="token operator">=</span> df<span class="token punctuation">.</span><span class="token function">join</span><span class="token punctuation">(</span>pd<span class="token punctuation">.</span><span class="token function">get_dummies</span><span class="token punctuation">(</span>df<span class="token punctuation">.</span>color<span class="token punctuation">)</span><span class="token punctuation">)</span></pre>
<figure>
<div class="image-block"><img class="" src="https://ask.qcloudimg.com/http-save/yehe-4908043/m0sb5qrqaz.png?imageView2/2/w/1620" alt="" /></div>
</figure>
<p>Reference: <a href="https://blog.csdn.net/maymay_/article/details/80198468" target="_blank" rel="nofollow noopener noreferrer" data-from="10680">https://blog.csdn.net/maymay_/article/details/80198468</a></p>
<pre> 
&gt;&gt;&gt; train_filter.info()
&lt;class 'pandas.core.frame.DataFrame'&gt;
RangeIndex: 1482 entries, 0 to 1481
Columns: 182 entries, SampleID to BS120
dtypes: float64(177), int64(2), object(3)
memory usage: 2.1+ MB
&gt;&gt;&gt; train_filter.dtypes
SampleID object
Streptococcus Infection float64
Duration_of_gestation object
Gestation_age float64
Gestation_age_G1 float64
Gestation_age_G2 float64
GDM_HDP float64
Age int64
Age_group int64
Blood_type float64
Medication_use float64
Progesterone_use float64
Pregnancy_mode float64
Native_place float64
Combined_disease float64
Infection float64
Scar_uterus float64
Risk_rating float64
Anamnesis float64
Thalassemia float64
Ovary_disease float64
Hepatopathy float64
Allergic_history float64
Thyroid_disease float64
Hysteromyoma float64
Breast_disease float64
Weight_at_delivery object
Weight_before_pregnancy float64
Height float64
BMI_before_pregnancy float64
 ... 
B_A/G float64
B_r_GT_G float64
B_r_GT float64
B_TBA_G float64
B_TBA float64
B_ALT_G float64
B_ALT float64
B_AST_G float64
B_AST float64
B_TBIL_G float64
B_TBIL float64
B_DBIL_G float64
B_DBIL float64
B_IBIL_G float64
B_IBIL float64
B_Crea_G float64
B_Crea float64
B_CysC_G float64
B_CysC float64
B_UA_G float64
B_UA float64
B_Urea_G float64
B_Urea float64
B_GLU_G float64
B_GLU float64
HbA1c_G float64
HbA1c float64
BS float64
BS60 float64
BS120 float64
Length: 182, dtype: object



</pre>
<p>scikit-learn is a machine-learning toolkit built on Python.</p>
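<p>A minimal end-to-end example of the scikit-learn API (fit and score on the bundled iris data; unrelated to the dataset above):</p>

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# load a toy dataset, split it, fit a model, and score on held-out data
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))  # accuracy on the test split
```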
<p>&nbsp;</p>
<p>&nbsp;</p>
<p>http://www.scikitlearn.com.cn/</p>
]]></content:encoded>
			<wfw:commentRss>http://www.biofacebook.com/?feed=rss2&#038;p=1512</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>conda 报错　Solving environment: failed</title>
		<link>http://www.biofacebook.com/?p=1509</link>
		<comments>http://www.biofacebook.com/?p=1509#comments</comments>
		<pubDate>Fri, 22 May 2020 03:07:44 +0000</pubDate>
		<dc:creator><![CDATA[szypanther]]></dc:creator>
				<category><![CDATA[Linux相关]]></category>

		<guid isPermaLink="false">http://www.biofacebook.com/?p=1509</guid>
		<description><![CDATA[<p>现在说说我的解决思路： 1.根据错误内容，安装失败的原因应该是这个网址 https://repo.anaconda.com/pkgs/main/noarch/repodata.json.bz2 请求失败。 2.所以我尝试用</p> <p>wget https://repo.anaconda.com/pkgs/main/noarch/repodata.json.bz2 1 手动下载这个包，结果出现以下错误。</p> <p>-2018-12-12 18:29:18&#8211; https://repo.anaconda.com/pkgs/main/noarch/repodata.json.bz2 Connecting to 127.0.0.1:33473&#8230; failed: Connection refused.</p> <p>1 2 3 那么应该寻找失败的原因，127.0.0.1表示的是本机，应该不会有什么问题，那么会不会是因为端口33473被占用的原因。 3.用netstat -ntpl查看本地端口的使用情况</p> <p>netstat -ntpl (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) Active Internet connections (only servers) Proto [...]]]></description>
				<content:encoded><![CDATA[<p>现在说说我的解决思路：<br />
1.根据错误内容，安装失败的原因应该是这个网址 https://repo.anaconda.com/pkgs/main/noarch/repodata.json.bz2 请求失败。<br />
2.所以我尝试用</p>
<p>wget https://repo.anaconda.com/pkgs/main/noarch/repodata.json.bz2<br />
to download the file manually, which produced the following error:</p>
<p>--2018-12-12 18:29:18--  https://repo.anaconda.com/pkgs/main/noarch/repodata.json.bz2<br />
Connecting to 127.0.0.1:33473... failed: Connection refused.</p>
<p>Next, look for the cause. 127.0.0.1 is the local machine, so that should not be a problem; could it be that port 33473 is occupied?<br />
3. Use netstat -ntpl to check local port usage:</p>
<p>netstat -ntpl<br />
(Not all processes could be identified, non-owned process info<br />
will not be shown, you would have to be root to see it all.)<br />
Active Internet connections (only servers)<br />
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name<br />
tcp 0 0 127.0.0.1:57837 0.0.0.0:* LISTEN 2165/python<br />
tcp 0 0 127.0.0.1:37138 0.0.0.0:* LISTEN 15851/python<br />
tcp 0 0 127.0.0.1:45331 0.0.0.0:* LISTEN 2050/python<br />
tcp 0 0 127.0.0.1:37492 0.0.0.0:* LISTEN 2165/python<br />
tcp 0 0 127.0.1.1:53 0.0.0.0:* LISTEN &#8211;<br />
tcp 0 0 127.0.0.1:53526 0.0.0.0:* LISTEN 15851/python<br />
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN &#8211;<br />
tcp 0 0 127.0.0.1:45527 0.0.0.0:* LISTEN 2165/python<br />
tcp 0 0 127.0.0.1:45655 0.0.0.0:* LISTEN 2165/python<br />
tcp 0 0 127.0.0.1:36887 0.0.0.0:* LISTEN 2050/python<br />
tcp 0 0 127.0.0.1:8888 0.0.0.0:* LISTEN 30648/python<br />
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN &#8211;<br />
tcp 0 0 127.0.0.1:43001 0.0.0.0:* LISTEN 15851/python<br />
tcp 0 0 127.0.0.1:59033 0.0.0.0:* LISTEN 2050/python<br />
tcp 0 0 127.0.0.1:34684 0.0.0.0:* LISTEN 2165/python<br />
tcp 0 0 127.0.0.1:43869 0.0.0.0:* LISTEN 2165/python<br />
tcp 0 0 127.0.0.1:45631 0.0.0.0:* LISTEN 2050/python<br />
tcp 0 0 127.0.0.1:58368 0.0.0.0:* LISTEN 15851/python<br />
tcp 0 0 127.0.0.1:34498 0.0.0.0:* LISTEN 2050/python<br />
tcp 0 0 127.0.0.1:42147 0.0.0.0:* LISTEN 15851/python<br />
tcp 0 0 127.0.0.1:34211 0.0.0.0:* LISTEN 4448/pgAdmin4<br />
tcp 0 0 127.0.0.1:51367 0.0.0.0:* LISTEN 2050/python<br />
tcp 0 0 127.0.0.1:51338 0.0.0.0:* LISTEN 15851/python<br />
tcp6 0 0 127.0.0.1:8079 :::* LISTEN 28374/java<br />
tcp6 0 0 :::8080 :::* LISTEN 28374/java<br />
tcp6 0 0 ::1:631 :::* LISTEN &#8211;<br />
Port 33473 is not occupied by any service.<br />
4. Check the proxy settings:</p>
<p>export | grep -i proxy<br />
HTTPS_PROXY=http://127.0.0.1:33473/<br />
HTTP_PROXY=http://127.0.0.1:33473/<br />
NO_PROXY=localhost,127.0.0.0/8,::1<br />
http_proxy=http://127.0.0.1:33473/<br />
https_proxy=http://127.0.0.1:33473/<br />
no_proxy=localhost,127.0.0.0/8,::1<br />
Finally, the cause: port 33473 is taken by a proxy, so the next step is to disable those proxies.<br />
5. Use unset to clear every proxy variable pointing at port 33473,<br />
e.g. unset HTTPS_PROXY<br />
Note that the variable names are case-sensitive.</p>
<p>After that, conda install finally worked.</p>
<p>Original post: https://blog.csdn.net/xtfge0915/java/article/details/84977765</p>
]]></content:encoded>
			<wfw:commentRss>http://www.biofacebook.com/?feed=rss2&#038;p=1509</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>DBS  model training</title>
		<link>http://www.biofacebook.com/?p=1499</link>
		<comments>http://www.biofacebook.com/?p=1499#comments</comments>
		<pubDate>Thu, 14 May 2020 08:39:21 +0000</pubDate>
		<dc:creator><![CDATA[szypanther]]></dc:creator>
				<category><![CDATA[生物信息]]></category>

		<guid isPermaLink="false">http://www.biofacebook.com/?p=1499</guid>
		<description><![CDATA[<p>phenotype gproNOG.annotations bitscores</p> <p>This is the complete workflow used to generate a random forest model using output data from an hmmsearch of your protein coding genes against eggNOG gamma proteobacterial protein HMMs. Before running this notebook, run the parse_hmmsearch.pl script to get a tab-delimited file containing bitscores for all isolates.</p> <p>&#8220;`{r, read in data} # [...]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.biofacebook.com/wp-content/uploads/2020/05/phenotype.tsv">phenotype</a> <a href="http://www.biofacebook.com/wp-content/uploads/2020/05/gproNOG.annotations.tsv">gproNOG.annotations</a> <a href="http://www.biofacebook.com/wp-content/uploads/2020/05/bitscores.tsv">bitscores</a></p>
<p>This is the complete workflow used to generate a random forest model using output data from an hmmsearch of your protein coding genes against eggNOG gamma proteobacterial protein HMMs. Before running this notebook, run the parse_hmmsearch.pl script to get a tab-delimited file containing bitscores for all isolates.</p>
<pre><code># R Markdown chunk: read in data
# library(gplots)
library(caret)
library(randomForest)
set.seed(1)
# set the directory you are working from
directory &lt;- "/home/shenzy/UNISED/CII_ECOR_update_final_164genes"
# Reading in the eggNOG model scores, checking to make sure that top eggNOG model hit for each protein in the orthogroup is the same
traindata &lt;- read.delim(paste(directory, "/bitscores.tsv", sep=""))
traindata &lt;- t(traindata)
phenotype &lt;- read.delim(paste(directory, "/phenotype.tsv", sep=""), header=F)
phenotype[,1] &lt;- make.names(phenotype[,1])
traindata &lt;- cbind.data.frame(traindata, phenotype=phenotype[match(row.names(traindata), phenotype[,1]),2])
traindata[is.na(traindata)] &lt;- 0
# traindata &lt;- na.roughfix(traindata)
traindata &lt;- traindata[,-nearZeroVar(traindata)]
names(traindata) &lt;- make.names(names(traindata))
</code></pre>
<p>The following section is an optional step for picking the best values of mtry (the number of genes sampled per node) and ntree (the number of trees in your random forest) for building your model. Instead of running all of the code at once, proceed through each step of model building and examine the figures produced. These will give you an indication of the point at which the performance of your model starts to level off.</p>
<p>In general, greater values of ntree and mtry will give you better stability in the top genes identified by the model. You can alternatively skip this step and proceed immediately to the next one, where values of 10,000 trees and p/10 genes per node (where p is the total number of genes in the training data) have been chosen as a good starting point.</p>
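<p>For readers working in Python rather than R, the same ntree/mtry sweep can be sketched with scikit-learn, where n_estimators plays the role of ntree and max_features the role of mtry (synthetic data, illustrative only):</p>

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# synthetic stand-in for the bitscore training data
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# sweep trees (ntree) and features per split (mtry), tracking OOB error
for n_trees in (50, 250, 1000):
    for max_feat in (2, 5, 10):
        model = RandomForestClassifier(
            n_estimators=n_trees, max_features=max_feat,
            oob_score=True, random_state=1,
        ).fit(X, y)
        print(n_trees, max_feat, round(1 - model.oob_score_, 3))
```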
<p># R Markdown chunk: train model<br />
# this section is for picking out the best parameters for building your model<br />
set.seed(1)<br />
# varying ntree<br />
error &lt;- vector()<br />
sparsity &lt;- vector()<br />
for(i in c(1, 10, 50, 250, 500, 1000, 1500, 2000, 5000, 10000)) {<br />
model &lt;- randomForest(phenotype ~ ., data=traindata, ntree=i)<br />
error &lt;- c(error, model$err.rate[length(model$err.rate)])<br />
sparsity &lt;- c(sparsity, (sum(model$importance[,1]&lt;=0))/(ncol(traindata)-1))<br />
}<br />
# varying mtry<br />
error2 &lt;- vector()<br />
sparsity2 &lt;- vector()<br />
param &lt;- ncol(traindata)-1<br />
for(i in c(1, round(param/10), round(param/5), round(param/3), round(param/2), param)) {<br />
model &lt;- randomForest(phenotype ~ ., data=traindata, ntree=10000, mtry=i)<br />
error2 &lt;- c(error2, model$err.rate[length(model$err.rate)])<br />
sparsity2 &lt;- c(sparsity2, (sum(model$importance[,1]&lt;=0))/(ncol(traindata)-1))<br />
}<br />
model &lt;- randomForest(phenotype ~ ., data=traindata, ntree=10000, mtry=param/10, na.action=na.roughfix)<br />
png(paste(directory, "/model_training/m1_error_vs_ntree.png", sep=""), width=350, height = 350)<br />
plot(x=c(1, 10, 50, 250, 500, 1000, 1500, 2000, 5000, 10000), y=error, xlab="Number of trees", ylab="OOB error rate", pch=16)<br />
dev.off()<br />
png(paste(directory, "/model_training/m1_sparsity_vs_ntree.png", sep=""), width=350, height = 350)<br />
plot(x=c(1, 10, 50, 250, 500, 1000, 1500, 2000, 5000, 10000), y=sparsity, xlab="Number of trees", ylab="% genes uninformative", pch=16)<br />
dev.off()<br />
png(paste(directory, "/model_training/m1_error_vs_mtry.png", sep=""), width=350, height = 350)<br />
plot(x=c(1, round(param/10), round(param/5), round(param/3), round(param/2), param), y=error2, xlab="Number of genes sampled per tree", ylab="OOB error rate", pch=16)<br />
dev.off()<br />
png(paste(directory, "/model_training/m1_sparsity_vs_mtry.png", sep=""), width=350, height = 350)<br />
plot(x=c(1, round(param/10), round(param/5), round(param/3), round(param/2), param), y=sparsity2, xlab="Number of genes sampled per tree", ylab="% genes uninformative", pch=16)<br />
dev.off()<br />
train2 &lt;- traindata[,match(names(model$importance[model$importance[,1]&gt;0,]), colnames(traindata))]<br />
train2 &lt;- cbind(train2, phenotype=traindata$phenotype)<br />
error &lt;- vector()<br />
sparsity &lt;- vector()<br />
for(i in c(1, 10, 50, 250, 500, 1000, 1500, 2000, 5000, 10000)) {<br />
model &lt;- randomForest(phenotype ~ ., data=train2, ntree=i, na.action=na.roughfix)<br />
error &lt;- c(error, median(model$err.rate))<br />
sparsity &lt;- c(sparsity, (sum(model$importance[,1]&lt;=0))/(ncol(train2)-1))<br />
}<br />
error2 &lt;- vector()<br />
sparsity2 &lt;- vector()<br />
param &lt;- ncol(train2)-1<br />
for(i in c(1, round(param/10), round(param/5), round(param/3), round(param/2), param)) {<br />
model &lt;- randomForest(phenotype ~ ., data=train2, ntree=10000, mtry=i, na.action=na.roughfix)<br />
error2 &lt;- c(error2, median(model$err.rate))<br />
sparsity2 &lt;- c(sparsity2, (sum(model$importance[,1]&lt;=0))/(ncol(train2)-1))<br />
}<br />
model &lt;- randomForest(phenotype ~ ., data=train2, ntree=10000, mtry=param/10, na.action=na.roughfix)<br />
png(paste(directory, "/model_training/m2_error_vs_ntree.png", sep=""), width=350, height = 350)<br />
plot(x=c(1, 10, 50, 250, 500, 1000, 1500, 2000, 5000, 10000), y=error, xlab="Number of trees", ylab="OOB error rate", pch=16)<br />
dev.off()<br />
png(paste(directory, "/model_training/m2_sparsity_vs_ntree.png", sep=""), width=350, height = 350)<br />
plot(x=c(1, 10, 50, 250, 500, 1000, 1500, 2000, 5000, 10000), y=sparsity, xlab="Number of trees", ylab="% genes uninformative", pch=16)<br />
dev.off()<br />
png(paste(directory, "/model_training/m2_error_vs_mtry.png", sep=""), width=350, height = 350)<br />
plot(x=c(1, round(param/10), round(param/5), round(param/3), round(param/2), param), y=error2, xlab="Number of genes sampled per tree", ylab="OOB error rate", pch=16)<br />
dev.off()<br />
png(paste(directory, "/model_training/m2_sparsity_vs_mtry.png", sep=""), width=350, height = 350)<br />
plot(x=c(1, round(param/10), round(param/5), round(param/3), round(param/2), param), y=sparsity2, xlab="Number of genes sampled per tree", ylab="% genes uninformative", pch=16)<br />
dev.off()<br />
train3 &lt;- train2[,match(names(model$importance[model$importance[,1]&gt;quantile(model$importance[,1], 0.5),]), colnames(train2))]<br />
train3 &lt;- cbind(train3, phenotype=train2$phenotype)<br />
error &lt;- vector()<br />
sparsity &lt;- vector()<br />
param &lt;- ncol(train3)-1<br />
for(i in c(1, 10, 50, 250, 500, 1000, 1500, 2000, 5000, 10000)) {<br />
model &lt;- randomForest(phenotype ~ ., data=train3, ntree=i, na.action=na.roughfix)<br />
error &lt;- c(error, median(model$err.rate))<br />
sparsity &lt;- c(sparsity, (sum(model$importance[,1]&lt;=0))/(ncol(train3)-1))<br />
}<br />
error2 &lt;- vector()<br />
sparsity2 &lt;- vector()<br />
for(i in c(1, round(param/10), round(param/5), round(param/3), round(param/2), param)) {<br />
model &lt;- randomForest(phenotype ~ ., data=train3, ntree=10000, mtry=i, na.action=na.roughfix)<br />
error2 &lt;- c(error2, median(model$err.rate))<br />
sparsity2 &lt;- c(sparsity2, (sum(model$importance[,1]&lt;=0))/(ncol(train3)-1))<br />
}<br />
model &lt;- randomForest(phenotype ~ ., data=train3, ntree=10000, mtry=param/10, na.action=na.roughfix)<br />
png(paste(directory, "/model_training/m3_error_vs_ntree.png", sep=""), width=350, height = 350)<br />
plot(x=c(1, 10, 50, 250, 500, 1000, 1500, 2000, 5000, 10000), y=error, xlab="Number of trees", ylab="OOB error rate", pch=16)<br />
dev.off()<br />
png(paste(directory, &#8220;/model_training/m3_sparsity_vs_ntree.png&#8221;, sep=&#8221;&#8221;), width=350, height = 350)<br />
plot(x=c(1, 10, 50, 250, 500, 1000, 1500, 2000, 5000, 10000), y=sparsity, xlab=&#8221;Number of trees&#8221;, ylab=&#8221;% genes uninformative&#8221;, pch=16)<br />
dev.off()<br />
png(paste(directory, &#8220;/model_training/m3_error_vs_mtry.png&#8221;, sep=&#8221;&#8221;), width=350, height = 350)<br />
plot(x=c(1, round(param/10), round(param/5), round(param/3), round(param/2), param), y=error2, xlab=&#8221;Number of genes sampled per tree&#8221;, ylab=&#8221;OOB error rate&#8221;, pch=16)<br />
dev.off()<br />
png(paste(directory, &#8220;/model_training/m3_sparsity_vs_mtry.png&#8221;, sep=&#8221;&#8221;), width=350, height = 350)<br />
plot(x=c(1, round(param/10), round(param/5), round(param/3), round(param/2), param), y=sparsity2, xlab=&#8221;Number of genes sampled per tree&#8221;, ylab=&#8221;% genes uninformative&#8221;, pch=16)<br />
dev.off()<br />
train4 &lt;- train3[,match(names(model$importance[model$importance[,1]&gt;quantile(model$importance[,1], 0.5),]), colnames(train3))]<br />
train4 &lt;- cbind(train4, phenotype=train3$phenotype)<br />
error &lt;- vector()<br />
sparsity &lt;- vector()<br />
for(i in c(1, 10, 50, 250, 500, 1000, 1500, 2000, 5000, 10000)) {<br />
model &lt;- randomForest(phenotype ~ ., data=train4, ntree=i, na.action=na.roughfix)<br />
error &lt;- c(error, median(model$err.rate))<br />
sparsity &lt;- c(sparsity, (sum(model$importance[,1]&lt;=0))/(ncol(train4)-1))<br />
}<br />
error2 &lt;- vector()<br />
sparsity2 &lt;- vector()<br />
param &lt;- ncol(train4)-1<br />
for(i in c(1, round(param/10), round(param/5), round(param/3), round(param/2), param)) {<br />
model &lt;- randomForest(phenotype ~ ., data=train4, ntree=10000, mtry=i, na.action=na.roughfix)<br />
error2 &lt;- c(error2, median(model$err.rate))<br />
sparsity2 &lt;- c(sparsity2, (sum(model$importance[,1]&lt;=0))/(ncol(train4)-1))<br />
}<br />
model &lt;- randomForest(phenotype ~ ., data=train4, ntree=10000, mtry=param/10, na.action=na.roughfix)<br />
png(paste(directory, &#8220;/model_training/m4_error_vs_ntree.png&#8221;, sep=&#8221;&#8221;), width=350, height = 350)<br />
plot(x=c(1, 10, 50, 250, 500, 1000, 1500, 2000, 5000, 10000), y=error, xlab=&#8221;Number of trees&#8221;, ylab=&#8221;OOB error rate&#8221;, pch=16)<br />
dev.off()<br />
png(paste(directory, &#8220;/model_training/m4_sparsity_vs_ntree.png&#8221;, sep=&#8221;&#8221;), width=350, height = 350)<br />
plot(x=c(1, 10, 50, 250, 500, 1000, 1500, 2000, 5000, 10000), y=sparsity, xlab=&#8221;Number of trees&#8221;, ylab=&#8221;% genes uninformative&#8221;, pch=16)<br />
dev.off()<br />
png(paste(directory, &#8220;/model_training/m4_error_vs_mtry.png&#8221;, sep=&#8221;&#8221;), width=350, height = 350)<br />
plot(x=c(1, round(param/10), round(param/5), round(param/3), round(param/2), param), y=error2, xlab=&#8221;Number of genes sampled per tree&#8221;, ylab=&#8221;OOB error rate&#8221;, pch=16)<br />
dev.off()<br />
png(paste(directory, &#8220;/model_training/m4_sparsity_vs_mtry.png&#8221;, sep=&#8221;&#8221;), width=350, height = 350)<br />
plot(x=c(1, round(param/10), round(param/5), round(param/3), round(param/2), param), y=sparsity2, xlab=&#8221;Number of genes sampled per tree&#8221;, ylab=&#8221;% genes uninformative&#8221;, pch=16)<br />
dev.off()<br />
model$predicted<br />
names(model$importance[order(model$importance, decreasing=T),][1:10])<br />
save(model, train2, train3, train4, file=paste(directory, &#8220;/traindatanew.Rdata&#8221;, sep=&#8221;&#8221;))<br />
&#8220;`</p>
<p>This section allows you to build a model through iterative feature selection, using parameters that we feel are sensible. You can substitute in your own parameters chosen from the process above if you prefer.</p>
<p>```{r, quicker model building}<br />
set.seed(1)<br />
param &lt;- ncol(traindata)-1<br />
model1 &lt;- randomForest(phenotype ~ ., data=traindata, ntree=10000, mtry=param/10, na.action=na.roughfix)<br />
model1<br />
png(paste(directory, "/VI_fullnew.png", sep=""), width=400, height=350)<br />
plot(1:param, model1$importance[order(model1$importance, decreasing=T)], xlim=c(1,1000), ylab="Variable importance", xlab="Top genes")<br />
dev.off()<br />
pdf(paste(directory, "/VI_fullnew.pdf", sep=""), width=5, height=5)<br />
plot(1:param, model1$importance[order(model1$importance, decreasing=T)], xlim=c(1,1000), ylab="Variable importance", xlab="Top genes")<br />
dev.off()<br />
train2 &lt;- traindata[,match(names(model1$importance[model1$importance[,1]&gt;0,]), colnames(traindata))]<br />
train2 &lt;- cbind(train2, phenotype=traindata$phenotype)<br />
param &lt;- ncol(train2)-1<br />
model2 &lt;- randomForest(phenotype ~ ., data=train2, ntree=10000, mtry=param/10, na.action=na.roughfix)<br />
model2<br />
train3 &lt;- train2[,match(names(model2$importance[model2$importance[,1]&gt;quantile(model2$importance[,1], 0.5),]), colnames(train2))]<br />
train3 &lt;- cbind(train3, phenotype=train2$phenotype)<br />
param &lt;- ncol(train3)-1<br />
model3 &lt;- randomForest(phenotype ~ ., data=train3, ntree=10000, mtry=param/10, na.action=na.roughfix)<br />
model3<br />
train4 &lt;- train3[,match(names(model3$importance[model3$importance[,1]&gt;quantile(model3$importance[,1], 0.5),]), colnames(train3))]<br />
train4 &lt;- cbind(train4, phenotype=train3$phenotype)<br />
param &lt;- ncol(train4)-1<br />
model4 &lt;- randomForest(phenotype ~ ., data=train4, ntree=10000, mtry=param/10, na.action=na.roughfix)<br />
model4<br />
train5 &lt;- train4[,match(names(model4$importance[model4$importance[,1]&gt;quantile(model4$importance[,1], 0.5),]), colnames(train4))]<br />
train5 &lt;- cbind(train5, phenotype=train4$phenotype)<br />
param &lt;- ncol(train5)-1<br />
model5 &lt;- randomForest(phenotype ~ ., data=train5, ntree=10000, mtry=param/10, na.action=na.roughfix, proximity=T)<br />
model5<br />
train6 &lt;- train5[,match(names(model5$importance[model5$importance[,1]&gt;quantile(model5$importance[,1], 0.5),]), colnames(train5))]<br />
train6 &lt;- cbind(train6, phenotype=train5$phenotype)<br />
param &lt;- ncol(train6)-1<br />
model6 &lt;- randomForest(phenotype ~ ., data=train6, ntree=10000, mtry=param/10, na.action=na.roughfix, proximity=T)<br />
model6<br />
train7 &lt;- train6[,match(names(model6$importance[model6$importance[,1]&gt;quantile(model6$importance[,1], 0.5),]), colnames(train6))]<br />
train7 &lt;- cbind(train7, phenotype=train6$phenotype)<br />
param &lt;- ncol(train7)-1<br />
model7 &lt;- randomForest(phenotype ~ ., data=train7, ntree=10000, mtry=param/10, na.action=na.roughfix, proximity=T)<br />
model7<br />
model7$predicted<br />
names(model7$importance[order(model7$importance[,1], decreasing=T),])[1:10]<br />
png(paste(directory, "/final_model_VInew.png", sep=""), width=400, height=350)</p>
<p>plot(1:param, model7$importance[order(model7$importance, decreasing=T),], xlab="", ylab="Variable importance")<br />
dev.off()<br />
save(model1, model2, model3, model4, model5, model6, model7, traindata, train2, train3, train4, train5, train6, train7, file=paste(directory, "/finalmodelnew.Rdata", sep=""))<br />
```</p>
<p>The following section will show you how the performance of your model has improved as you iterated through cycles of feature selection. It will give you an idea of whether you have performed enough cycles, or whether you need to carry on.</p>
<p>The first plot shows the out-of-bag votes, which are votes cast by trees on data they were not trained on. This gives you a good idea of how the model would score strains that are distantly related to your training data. The second plot shows votes cast by all of the trees on all of the serovars, and gives you a better idea of how your model would score similar strains.</p>
<p>```{r, looking at votes}<br />
votedata &lt;- rbind.data.frame(cbind.data.frame(model=rep("1", 9), Serovar=row.names(model1$votes), Invasive=model1$votes[,2]), cbind.data.frame(model=rep("2", 9), Serovar=row.names(model1$votes), Invasive=model2$votes[,2]), cbind.data.frame(model=rep("3", 9), Serovar=row.names(model1$votes), Invasive=model3$votes[,2]), cbind.data.frame(model=rep("4", 9), Serovar=row.names(model1$votes), Invasive=model4$votes[,2]), cbind.data.frame(model=rep("5", 9), Serovar=row.names(model1$votes), Invasive=model5$votes[,2]), cbind.data.frame(model=rep("6", 9), Serovar=row.names(model1$votes), Invasive=model6$votes[,2]), cbind.data.frame(model=rep("7", 9), Serovar=row.names(model1$votes), Invasive=model7$votes[,2]))<br />
votedata$Phenotype &lt;- phenotype[match(votedata$Serovar, phenotype[,1]),2]<br />
ggplot(votedata, aes(x=model, y=Invasive, col=Phenotype)) + geom_jitter(width=0.1) + theme_classic(8) + ylab("Proportion of votes for selected strains' phenotype") + xlab("Model iteration") + geom_hline(yintercept=0.5, lty=2, col="grey") + theme(legend.key.size = unit(0.3, "cm"))<br />
ggsave("votesnew.png", width=7, height=2.5)<br />
ggsave("votesnew.pdf", width=7, height=2.5)<br />
votedata$Phenotype &lt;- phenotype[match(votedata$Serovar, phenotype[,1]),2]<br />
ggplot(votedata, aes(x=model, y=Invasive, col=Serovar)) + geom_jitter(width=0.1) + theme_classic(8) + ylab("Proportion of votes for selected strains' phenotype") + xlab("Model iteration") + geom_hline(yintercept=0.5, lty=2, col="grey") + theme(legend.key.size = unit(0.3, "cm"))<br />
ggsave("votes.png", width=7, height=2.5)<br />
ggsave("votes.pdf", width=7, height=2.5)<br />
fullvotes &lt;- rbind.data.frame(cbind.data.frame(model=rep("1", 9), Serovar=row.names(model1$votes), Invasive=predict(model1, traindata, type="vote")[,2]), cbind.data.frame(model=rep("2", 9), Serovar=row.names(model1$votes), Invasive=predict(model2, train2, type="vote")[,2]), cbind.data.frame(model=rep("3", 9), Serovar=row.names(model1$votes), Invasive=predict(model3, train3, type="vote")[,2]), cbind.data.frame(model=rep("4", 9), Serovar=row.names(model1$votes), Invasive=predict(model4, train4, type="vote")[,2]), cbind.data.frame(model=rep("5", 9), Serovar=row.names(model1$votes), Invasive=predict(model5, train5, type="vote")[,2]), cbind.data.frame(model=rep("6", 9), Serovar=row.names(model1$votes), Invasive=predict(model6, train6, type="vote")[,2]), cbind.data.frame(model=rep("7", 9), Serovar=row.names(model1$votes), Invasive=predict(model7, train7, type="vote")[,2]))<br />
fullvotes$Phenotype &lt;- phenotype[match(fullvotes$Serovar, phenotype[,1]),2]<br />
ggplot(fullvotes, aes(x=model, y=Invasive, col=Phenotype)) + geom_jitter(width=0.1) + theme_classic(8) + ylab("Proportion of votes for invasive phenotype") + xlab("Model iteration") + geom_hline(yintercept=0.5, lty=2, col="grey") + theme(legend.key.size = unit(0.3, "cm"))<br />
ggsave("full_votesnew.png", width=7, height=2.5)<br />
ggsave("full_votesnew.pdf", width=7, height=2.5)<br />
```</p>
<p>This section shows you how frequently particular genes are identified as top predictors across different iterations of model building, and how much importance they are assigned. Because model building involves a lot of stochasticity, you may find that your top predictors change considerably across iterations.</p>
<p>```{r, assessing stability of predictors}<br />
set.seed(1)<br />
usefulgenes &lt;- data.frame()<br />
topgenes &lt;- data.frame()<br />
for(i in 1:10) {<br />
model &lt;- randomForest(phenotype ~ ., data=traindata, ntree=10000, mtry=param/10, na.action=na.roughfix)<br />
usefulgenes &lt;- rbind(usefulgenes, cbind(model=i, gene=names(model$importance[model$importance&gt;0,]), model$importance[model$importance&gt;0]))<br />
topgenes &lt;- rbind(topgenes, cbind(model=i, gene=names(model$importance[order(model$importance, decreasing=T),][1:20]), model$importance[order(model$importance, decreasing=T),][1:20]))<br />
}<br />
png(paste(directory, "/gene_usefulnessnew.png", sep=""))<br />
hist(table(usefulgenes$gene), col="grey", main="", xlab="Number of times each gene was useful (VI &gt; 0) in a model")<br />
dev.off()<br />
sum(table(usefulgenes$gene)==10)<br />
sum(table(usefulgenes$gene)&lt;10)<br />
topgenes$V3 &lt;- as.numeric(as.character(topgenes$V3))<br />
ggplot(topgenes, aes(x=model, y=gene, fill=as.numeric(V3))) + geom_tile() + ggtitle("Importance values for top genes across model iterations") + scale_fill_continuous("Importance")<br />
table(topgenes$gene)<br />
png(paste(directory, "/topgenesnew.png", sep=""))<br />
hist(table(topgenes$gene), col="grey", main="", xlab="Number of times each gene appeared in the\ntop 20 predictors")<br />
dev.off()<br />
save(usefulgenes, topgenes, file=paste(directory, "/allgenemodels.Rdata", sep=""))<br />
```</p>
<p>This section allows you to look at the COG categories of your top predictor genes, compared to the COG categories in your original training set, to see if any are over-represented.</p>
<p>```{r, COG analysis}<br />
annotations &lt;- read.delim(paste(directory, "/gproNOG.annotations.tsv", sep=""), header=F) # matching annotation file for your chosen model set can be downloaded from http://eggnogdb.embl.de/#/app/downloads<br />
nogs &lt;- read.delim(paste(directory, "/models_used_20ref.tsv", sep=""), header=F)<br />
nogs[,2] &lt;- sub(".*NOG\\.", "", nogs[,2])<br />
nogs[,2] &lt;- sub(".meta_raw", "", nogs[,2])<br />
info &lt;- annotations[match(nogs[,2], annotations[,2]),]<br />
info$V5 &lt;- as.character(info$V5)<br />
# proportion of each COG category that comes up in the top indicators<br />
background &lt;- nogs[match(colnames(traindata), make.names(nogs[,1])),2]<br />
background_COGs &lt;- annotations[match(background, annotations[,2]),5]<br />
library(plyr)<br />
bg_COGs2 &lt;- aaply(as.character(background_COGs), 1, function(i){<br />
if(!is.na(i)) {<br />
if(nchar(i)&gt;1) {<br />
char &lt;- substr(i,1,1)<br />
for(n in 2:nchar(i)) {<br />
char &lt;- paste(char,substr(i,n,n),sep=".")<br />
}<br />
return(char)<br />
} else {<br />
return(i)<br />
}<br />
} else{<br />
return(NA)<br />
}<br />
})</p>
<p>background_COGs2 &lt;- unlist(strsplit(bg_COGs2, "[.]"))<br />
predictors &lt;- nogs[match(colnames(train6), make.names(nogs[,1])),2]<br />
predictor_COGs &lt;- annotations[match(predictors, annotations[,2]),5]<br />
p_COGs2 &lt;- aaply(as.character(predictor_COGs), 1, function(i){<br />
if(!is.na(i)) {<br />
if(nchar(i)&gt;1) {<br />
char &lt;- substr(i,1,1)<br />
for(n in 2:nchar(i)) {<br />
char &lt;- paste(char,substr(i,n,n),sep=".")<br />
}<br />
return(char)<br />
} else {<br />
return(i)<br />
}<br />
} else{<br />
return(NA)<br />
}<br />
})<br />
predictor_COGs2 &lt;- unlist(strsplit(p_COGs2, "[.]"))<br />
barplot(rbind(table(background_COGs2), table(predictor_COGs2)[match(names(table(background_COGs2)), names(table(predictor_COGs2)))]))<br />
```</p>
<p>This section allows you to compare the performance of your model when it predicts the true phenotype with its performance on a randomly shuffled phenotype.</p>
<p>```{r, control}<br />
set.seed(2)<br />
control &lt;- traindata<br />
control$phenotype &lt;- sample(traindata$phenotype)<br />
param &lt;- ncol(control)-1<br />
cmodel1 &lt;- randomForest(phenotype ~ ., data=control, ntree=10000, mtry=param/10, na.action=na.roughfix)<br />
cmodel1<br />
png(paste(directory, "/VI_full_control.png", sep=""), width=400, height=350)<br />
plot(1:param, cmodel1$importance[order(cmodel1$importance, decreasing=T)], xlim=c(1,1000), ylab="Variable importance", xlab="Top genes")<br />
dev.off()<br />
pdf(paste(directory, "/VI_full_control.pdf", sep=""), width=5, height=5)<br />
plot(1:param, cmodel1$importance[order(cmodel1$importance, decreasing=T)], xlim=c(1,1000), ylab="Variable importance", xlab="Top genes")<br />
dev.off()<br />
ctrain2 &lt;- control[,match(names(cmodel1$importance[cmodel1$importance[,1]&gt;0,]), colnames(control))]<br />
ctrain2 &lt;- cbind(ctrain2, phenotype=control$phenotype)<br />
param &lt;- ncol(ctrain2)-1<br />
cmodel2 &lt;- randomForest(phenotype ~ ., data=ctrain2, ntree=10000, mtry=param/10, na.action=na.roughfix)<br />
cmodel2<br />
ctrain3 &lt;- ctrain2[,match(names(cmodel2$importance[cmodel2$importance[,1]&gt;quantile(cmodel2$importance[,1], 0.5),]), colnames(ctrain2))]<br />
ctrain3 &lt;- cbind(ctrain3, phenotype=ctrain2$phenotype)<br />
param &lt;- ncol(ctrain3)-1<br />
cmodel3 &lt;- randomForest(phenotype ~ ., data=ctrain3, ntree=10000, mtry=param/10, na.action=na.roughfix)<br />
cmodel3<br />
ctrain4 &lt;- ctrain3[,match(names(cmodel3$importance[cmodel3$importance[,1]&gt;quantile(cmodel3$importance[,1], 0.5),]), colnames(ctrain3))]<br />
ctrain4 &lt;- cbind(ctrain4, phenotype=ctrain3$phenotype)<br />
param &lt;- ncol(ctrain4)-1<br />
cmodel4 &lt;- randomForest(phenotype ~ ., data=ctrain4, ntree=10000, mtry=param/10, na.action=na.roughfix)<br />
cmodel4<br />
ctrain5 &lt;- ctrain4[,match(names(cmodel4$importance[cmodel4$importance[,1]&gt;quantile(cmodel4$importance[,1], 0.5),]), colnames(ctrain4))]<br />
ctrain5 &lt;- cbind(ctrain5, phenotype=ctrain4$phenotype)<br />
param &lt;- ncol(ctrain5)-1<br />
cmodel5 &lt;- randomForest(phenotype ~ ., data=ctrain5, ntree=10000, mtry=param/10, na.action=na.roughfix, proximity=T)<br />
cmodel5<br />
ctrain6 &lt;- ctrain5[,match(names(cmodel5$importance[cmodel5$importance[,1]&gt;quantile(cmodel5$importance[,1], 0.5),]), colnames(ctrain5))]<br />
ctrain6 &lt;- cbind(ctrain6, phenotype=ctrain5$phenotype)<br />
param &lt;- ncol(ctrain6)-1<br />
cmodel6 &lt;- randomForest(phenotype ~ ., data=ctrain6, ntree=10000, mtry=param/10, na.action=na.roughfix, proximity=T)<br />
cmodel6</p>
<p>cmodel6$predicted<br />
names(cmodel6$importance[order(cmodel6$importance[,1], decreasing=T),])[1:10]<br />
# compare votes and phenotypes for the control and real datasets<br />
cbind(cmodel6$votes, control$phenotype, model6$votes, train6$phenotype)</p>
<p>```</p>
<p>This approach repeats the full model-building process 5 times to see how similar the top predictor genes are.</p>
<p>```{r, testing the robustness of the top result}<br />
topgenes &lt;- vector()<br />
# picking out the top predictors from the model<br />
for(j in 1:5) {<br />
set.seed(j)<br />
error &lt;- vector()<br />
sparsity &lt;- vector()<br />
for(i in c(1, 10, 50, 250, 500, 1000, 1500, 2000, 5000, 10000)) {<br />
model &lt;- randomForest(phenotype ~ ., data=traindata, ntree=i, na.action=na.roughfix)<br />
error &lt;- c(error, median(model$err.rate))<br />
sparsity &lt;- c(sparsity, (sum(model$importance[,1]&lt;=0))/(ncol(traindata)-1))<br />
}<br />
error2 &lt;- vector()<br />
sparsity2 &lt;- vector()<br />
param &lt;- ncol(traindata)-1<br />
for(i in c(1, round(param/10), round(param/5), round(param/3), round(param/2), param)) {<br />
model &lt;- randomForest(phenotype ~ ., data=traindata, ntree=1000, mtry=i, na.action=na.roughfix)<br />
error2 &lt;- c(error2, median(model$err.rate))<br />
sparsity2 &lt;- c(sparsity2, (sum(model$importance[,1]&lt;=0))/(ncol(traindata)-1))<br />
}<br />
train2 &lt;- traindata[,match(names(model$importance[model$importance[,1]&gt;0,]), colnames(traindata))]<br />
train2 &lt;- cbind(train2, phenotype=traindata$phenotype)<br />
error &lt;- vector()<br />
sparsity &lt;- vector()<br />
for(i in c(1, 10, 50, 250, 500, 1000, 1500, 2000, 5000, 10000)) {<br />
model &lt;- randomForest(phenotype ~ ., data=train2, ntree=i, na.action=na.roughfix)<br />
error &lt;- c(error, median(model$err.rate))<br />
sparsity &lt;- c(sparsity, (sum(model$importance[,1]&lt;=0))/(ncol(train2)-1))<br />
}<br />
error2 &lt;- vector()<br />
sparsity2 &lt;- vector()<br />
param &lt;- ncol(train2)-1<br />
for(i in c(1, round(param/10), round(param/5), round(param/3), round(param/2), param)) {<br />
model &lt;- randomForest(phenotype ~ ., data=train2, ntree=20000, mtry=i, na.action=na.roughfix)<br />
error2 &lt;- c(error2, median(model$err.rate))<br />
sparsity2 &lt;- c(sparsity2, (sum(model$importance[,1]&lt;=0))/(ncol(train2)-1))<br />
}<br />
train3 &lt;- train2[,match(names(model$importance[model$importance[,1]&gt;quantile(model$importance[,1], 0.5),]), colnames(train2))]<br />
train3 &lt;- cbind(train3, phenotype=train2$phenotype)<br />
error &lt;- vector()<br />
sparsity &lt;- vector()<br />
for(i in c(1, 10, 50, 250, 500, 1000, 1500, 2000, 5000, 10000)) {<br />
model &lt;- randomForest(phenotype ~ ., data=train3, ntree=i, na.action=na.roughfix)<br />
error &lt;- c(error, median(model$err.rate))<br />
sparsity &lt;- c(sparsity, (sum(model$importance[,1]&lt;=0))/(ncol(train3)-1))<br />
}<br />
error2 &lt;- vector()<br />
sparsity2 &lt;- vector()<br />
param &lt;- ncol(train3)-1<br />
for(i in c(1, round(param/10), round(param/5), round(param/3), round(param/2), param)) {<br />
model &lt;- randomForest(phenotype ~ ., data=train3, ntree=2000, mtry=i, na.action=na.roughfix)<br />
error2 &lt;- c(error2, median(model$err.rate))<br />
sparsity2 &lt;- c(sparsity2, (sum(model$importance[,1]&lt;=0))/(ncol(train3)-1))<br />
}<br />
train4 &lt;- train3[,match(names(model$importance[model$importance[,1]&gt;quantile(model$importance[,1], 0.5),]), colnames(train3))]<br />
train4 &lt;- cbind(train4, phenotype=train3$phenotype)<br />
topgenes &lt;- c(topgenes, colnames(train4))<br />
}<br />
table(topgenes)<br />
```</p>
]]></content:encoded>
			<wfw:commentRss>http://www.biofacebook.com/?feed=rss2&#038;p=1499</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Microbial Diversity Research: Differential Analysis</title>
		<link>http://www.biofacebook.com/?p=1483</link>
		<comments>http://www.biofacebook.com/?p=1483#comments</comments>
		<pubDate>Fri, 27 Mar 2020 07:29:11 +0000</pubDate>
		<dc:creator><![CDATA[szypanther]]></dc:creator>
				<category><![CDATA[Bioinformatics]]></category>
		<category><![CDATA[Statistical Learning]]></category>

		<guid isPermaLink="false">http://www.biofacebook.com/?p=1483</guid>
		<description><![CDATA[ <p>1. Random forest models</p> <p>Random forest is an efficient machine-learning algorithm based on decision trees. It can be used to classify samples (classification) or for regression analysis (regression).</p> <p>As a non-linear classifier, it can uncover complex non-linear interdependencies among variables. Random forest analysis can identify the key OTUs that distinguish two groups of samples.</p> <p>Feature Importance Scores table, taken from the random forest output</p> <p>It records each OTU's contribution to the between-group differences.</p> <p>Note: as a rule of thumb, OTUs with a Mean_decrease_in_accuracy value above 0.05 are selected for further analysis; for samples with smaller between-group differences this cutoff may drop to 0.03.</p> <p>2. Cross-validation analysis</p> <p>Cross-validation is a practical statistical technique for splitting data into smaller subsets: the analysis is first performed on one subset, and the remaining subsets are used to confirm and validate it. The initial subset is called the training set; the others are called validation or test sets.</p> <p>The most common variant is k-fold cross-validation, in which the data are divided into k subsets, each serving once as the test set while the rest form the training set. The procedure is repeated k times, choosing a different test subset each time, and the average classification accuracy over the k rounds is reported.</p> <p>Every sample is therefore used for both training and testing, and each sample is validated exactly once.</p> <p>The combinations of key OTUs selected by random forest are searched exhaustively, aiming to build an efficient classifier with the lowest error rate from the fewest OTUs.</p> <p>Typically, the key OTUs from the random forest analysis are put through 10-fold cross-validation over different combinations to find the smallest OTU set that distinguishes the groups most accurately, which is then carried into further analyses such as ROC analysis.</p> <p>Note: the x-axis shows OTU combinations of different sizes and the y-axis the classification error rate of each combination. The combination with the fewest OTUs and the lowest error rate is taken as the minimal OTU set able to distinguish the groups.</p> <p>3. ROC curves</p> <p>The receiver operating characteristic (ROC) curve is another effective supervised-learning method. ROC analysis is a binary-classification technique for problems with exactly two classes; it can be used to select the best discriminative model and the best diagnostic cutoff.</p> <p>Based on domain knowledge, the measurements of the disease and reference groups are analyzed to set the upper and lower limits, the bin width, and the cut-off points; a cumulative frequency table is drawn up at the chosen intervals, and the sensitivity, specificity, and false positive rate (1 - specificity) are computed at every cut-off. Plotting sensitivity (the true positive rate) on the y-axis against 1 - specificity (the false positive rate) on the x-axis yields the ROC curve. The closer the curve lies to the top-left corner, the higher the diagnostic accuracy. Alternatively, the areas under the curves (AUC) of several tests can be compared: the test with the largest AUC has the greatest diagnostic value.</p> <p>Note: the x-axis is the false positive rate (FPR, 1 - specificity) and the y-axis the true positive rate (TPR, sensitivity). The point on the curve nearest the top-left corner is the best threshold, minimizing the total of false positives and false negatives. The AUC lies between 0.5 and 1.0; when AUC &gt; 0.5, the closer it is to 1 the better the diagnostic performance. An AUC of 0.5-0.7 indicates low accuracy, 0.7-0.9 moderate accuracy, and above 0.9 high accuracy. AUC = 0.5 means the test has no diagnostic value at all, and AUC &lt; 0.5 contradicts reality and is rarely seen in practice.</p> <p>4. Wilcoxon rank-sum test</p> <p>The Wilcoxon rank-sum test, also called the Mann-Whitney U test, is a non-parametric test for two independent samples. Its null hypothesis is that the two populations have the same distribution; by comparing the mean ranks of the two samples it judges whether the distributions differ. It can test the species of two sample groups for significant differences, and a false-discovery-rate (FDR) q value is computed from each p value.</p> <p>Note: mean is the average relative abundance of each species in the two groups and sd the corresponding standard deviation. The p value is the probability of observing the data under the null hypothesis; p &lt; 0.05 indicates a difference and p &lt; 0.01 a highly significant one; the q value is the false discovery rate.</p> <p>5. Differential-taxa heatmap analysis</p> <p>The generalization error is estimated by 10-fold cross-validation, with all other parameters left at their defaults. The model output also reports the expected "baseline" error, i.e. the error obtained if every sample of the most dominant class were misclassified. Each OTU's importance is scored by how much the model's prediction error increases when the OTU is removed: the higher the importance, the larger that OTU's contribution to prediction accuracy.</p> <p>The selected differential OTUs are clustered by their abundance in each sample and drawn as a heatmap, which makes it easy to see which species are enriched or depleted in which samples.</p> <p>Note: cells closer to blue indicate lower species abundance, cells closer to orange-red higher abundance. The left dendrogram clusters species by Spearman correlation distance; the top dendrogram clusters samples using Bray-Curtis, the most widely used between-sample distance metric.</p> <p>6. Welch's t-test for two groups</p> <p>Two sample groups with unequal variances can be compared with Welch's t-test, which identifies species (or, for metagenomes, genes) whose abundance differs significantly between the groups.</p> <p>Note: the plot shows the abundance proportions of each gene (or species) in the two groups; the middle panel shows the difference in proportions with its 95% confidence interval; the rightmost values are the p values, with p &lt; 0.05 indicating a significant difference.</p> <p>7. Shannon diversity boxplots</p> <p>The Shannon diversity indices of sample groups from different classifications or environments are summarized by quartiles to compare between-group differences in the Shannon index; a non-parametric Mann-Whitney test assesses the significance of the differences between groups.</p> [...]]]></description>
				<content:encoded><![CDATA[<div>
<div>
<p><b>1. Random forest models</b></p>
<p>Random forest is an efficient machine-learning algorithm based on decision trees. It can be used to classify samples (classification) or for regression analysis (regression).</p>
<p>As a non-linear classifier, it can uncover complex non-linear interdependencies among variables. Random forest analysis can identify the key OTUs that distinguish two groups of samples.</p>
<p><i>Feature Importance Scores table, taken from the random forest output</i></p>
</div>
<p>It records each OTU's contribution to the between-group differences.</p></div>
<div><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-a855cbdb5a069bb1.png"><img class="aligncenter size-large wp-image-1484" src="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-a855cbdb5a069bb1-1024x337.png" alt="18585978-a855cbdb5a069bb1" width="640" height="211" /></a></p>
<div>
<div>
<p>Note: as a rule of thumb, OTUs with a Mean_decrease_in_accuracy value above 0.05 are selected for further analysis; for samples with smaller between-group differences this cutoff may drop to 0.03.</p>
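<p>The mean decrease in accuracy used above is a permutation importance: shuffle one feature and see how far the model's accuracy falls. A minimal Python sketch of the idea, with invented toy data and a stand-in classifier rather than an actual random forest:</p>

```python
import random

def accuracy(model, X, y):
    """Fraction of samples the classifier labels correctly."""
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def mean_decrease_in_accuracy(model, X, y, feature, n_rounds=100, seed=0):
    """Permute one feature column and average the resulting accuracy drop."""
    rng = random.Random(seed)
    baseline = accuracy(model, X, y)
    drops = []
    for _ in range(n_rounds):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        X_perm = [row[:feature] + [v] + row[feature + 1:] for row, v in zip(X, col)]
        drops.append(baseline - accuracy(model, X_perm, y))
    return sum(drops) / n_rounds

# Toy data: feature 0 determines the class, feature 1 is pure noise.
X = [[0, 1], [0, 0], [1, 1], [1, 0], [0, 1], [1, 0]]
y = [0, 0, 1, 1, 0, 1]
model = lambda row: row[0]   # stand-in "classifier" that reads feature 0
print(mean_decrease_in_accuracy(model, X, y, feature=0))  # large drop
print(mean_decrease_in_accuracy(model, X, y, feature=1))  # 0.0: uninformative
```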
<p><b>2. Cross-validation analysis</b></p>
<p>Cross-validation is a practical statistical technique for splitting data into smaller subsets: the analysis is first performed on one subset, and the remaining subsets are used to confirm and validate it. The initial subset is called the training set; the others are called validation or test sets.</p>
<p>The most common variant is k-fold cross-validation, in which the data are divided into k subsets, each serving once as the test set while the rest form the training set. The procedure is repeated k times, choosing a different test subset each time, and the average classification accuracy over the k rounds is reported.</p>
<p>Every sample is therefore used for both training and testing, and each sample is validated exactly once.</p>
<p>The combinations of key OTUs selected by random forest are searched exhaustively, aiming to build an efficient classifier with the lowest error rate from the fewest OTUs.</p>
<p>Typically, the key OTUs from the random forest analysis are put through 10-fold cross-validation over different combinations to find the smallest OTU set that distinguishes the groups most accurately, which is then carried into further analyses such as ROC analysis.</p>
</div>
</div>
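<p>The k-fold scheme described above is only index bookkeeping; a short Python sketch (the classifier fitted on each training fold is out of scope here):</p>

```python
def k_fold_splits(n_samples, k):
    """Partition indices 0..n-1 into k test folds; the rest train each time."""
    indices = list(range(n_samples))
    # distribute any remainder over the first folds so sizes differ by at most 1
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    splits, start = [], 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        splits.append((train, test))
        start += size
    return splits

for train, test in k_fold_splits(10, 3):
    print(test)   # each index appears in exactly one test fold
```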
<div><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-ffceaf913a867414.png"><br />
<img class="aligncenter size-full wp-image-1485" src="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-ffceaf913a867414.png" alt="18585978-ffceaf913a867414" width="773" height="723" /></a></div>
<div>
<div>
<p>Note: the x-axis shows OTU combinations of different sizes and the y-axis the classification error rate of each combination. The combination with the fewest OTUs and the lowest error rate is taken as the minimal OTU set able to distinguish the groups.</p>
<p><b>3. ROC curves</b></p>
<p>The receiver operating characteristic (ROC) curve is another effective supervised-learning method. ROC analysis is a binary-classification technique for problems with exactly two classes; it can be used to select the best discriminative model and the best diagnostic cutoff.</p>
<p>Based on domain knowledge, the measurements of the disease and reference groups are analyzed to set the upper and lower limits, the bin width, and the cut-off points; a cumulative frequency table is drawn up at the chosen intervals, and the sensitivity, specificity, and false positive rate (1 - specificity) are computed at every cut-off. Plotting sensitivity (the true positive rate) on the y-axis against 1 - specificity (the false positive rate) on the x-axis yields the ROC curve. The closer the curve lies to the top-left corner, the higher the diagnostic accuracy. Alternatively, the areas under the curves (AUC) of several tests can be compared: the test with the largest AUC has the greatest diagnostic value.</p>
</div>
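<p>The construction just described (sweep a threshold over the scores, record an (FPR, TPR) point at each step, then integrate the curve) can be written out in plain Python. This sketch uses invented toy scores and assumes untied score values; real implementations also handle ties:</p>

```python
def roc_points(scores, labels):
    """(FPR, TPR) pairs as the decision threshold sweeps down the scores."""
    pairs = sorted(zip(scores, labels), key=lambda p: -p[0])
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Area under the curve by the trapezoidal rule."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

# Toy scores: higher means "more likely positive"; labels 1 = positive class.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 0, 1, 0, 0, 0]
print(auc(roc_points(scores, labels)))  # 0.9375
```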
<p><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-07a26b63fea21bf8.png"><img class="aligncenter size-full wp-image-1486" src="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-07a26b63fea21bf8.png" alt="18585978-07a26b63fea21bf8" width="685" height="688" /></a></p>
<div>
<div>
<p>Note: the x-axis is the false positive rate (FPR, 1 - specificity) and the y-axis the true positive rate (TPR, sensitivity). The point on the curve nearest the top-left corner is the best threshold, minimizing the total of false positives and false negatives. The AUC lies between 0.5 and 1.0; when AUC &gt; 0.5, the closer it is to 1 the better the diagnostic performance. An AUC of 0.5-0.7 indicates low accuracy, 0.7-0.9 moderate accuracy, and above 0.9 high accuracy. AUC = 0.5 means the test has no diagnostic value at all, and AUC &lt; 0.5 contradicts reality and is rarely seen in practice.</p>
<p><b>4. Wilcoxon rank-sum test</b></p>
<p>The Wilcoxon rank-sum test, also called the Mann-Whitney U test, is a non-parametric test for two independent samples. Its null hypothesis is that the two populations have the same distribution; by comparing the mean ranks of the two samples it judges whether the distributions differ. It can test the species of two sample groups for significant differences, and a false-discovery-rate (FDR) q value is computed from each p value.</p>
</div>
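<p>The rank machinery behind the test is simple enough to write out. A Python sketch of the U statistic only (midranks for ties; the p value step, via the normal approximation or exact tables, is omitted):</p>

```python
def rank_sum_u(x, y):
    """Mann-Whitney U statistic for the first of two independent samples."""
    combined = sorted(x + y)
    # assign midranks: tied values share the average of their rank positions
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        for v in combined[i:j]:
            ranks[v] = (i + 1 + j) / 2
        i = j
    r1 = sum(ranks[v] for v in x)           # rank sum of the first group
    return r1 - len(x) * (len(x) + 1) / 2   # U for the first group

print(rank_sum_u([1, 2, 3], [4, 5, 6]))   # 0.0: the groups do not overlap
```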
<p><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-6714c4d277105abd.png"><img class="aligncenter size-large wp-image-1487" src="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-6714c4d277105abd-1024x218.png" alt="18585978-6714c4d277105abd" width="640" height="136" /></a></p>
<div>
<div>
<p>Note: mean is the average relative abundance of each species in the two groups and sd the corresponding standard deviation. The p value is the probability of observing the data under the null hypothesis; p &lt; 0.05 indicates a difference and p &lt; 0.01 a highly significant one; the q value is the false discovery rate.</p>
<p><b>5. Differential-taxa heatmap analysis</b></p>
<p>The generalization error is estimated by 10-fold cross-validation, with all other parameters left at their defaults. The model output also reports the expected "baseline" error, i.e. the error obtained if every sample of the most dominant class were misclassified. Each OTU's importance is scored by how much the model's prediction error increases when the OTU is removed: the higher the importance, the larger that OTU's contribution to prediction accuracy.</p>
<p>The selected differential OTUs are clustered by their abundance in each sample and drawn as a heatmap, which makes it easy to see which species are enriched or depleted in which samples.</p>
</div>
<p><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-391e366b1420fa92.png"><img class="aligncenter size-full wp-image-1488" src="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-391e366b1420fa92.png" alt="18585978-391e366b1420fa92" width="660" height="650" /></a></p>
<div>
<div>
<p>Note: cells closer to blue indicate lower species abundance, cells closer to orange-red higher abundance. The left dendrogram clusters species by Spearman correlation distance; the top dendrogram clusters samples using Bray-Curtis, the most widely used between-sample distance metric.</p>
<p><b>6. Welch's t-test for two groups</b></p>
<p>Two sample groups with unequal variances can be compared with Welch's t-test, which identifies species (or, for metagenomes, genes) whose abundance differs significantly between the groups.</p>
</div>
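<p>For reference, the statistic behind Welch's t-test is easy to compute directly. A Python sketch with made-up numbers; the p value would come from the t distribution with the fractional Welch-Satterthwaite degrees of freedom shown:</p>

```python
import math

def welch_t(x, y):
    """Welch's t statistic and degrees of freedom for unequal variances."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)   # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    se2 = vx / nx + vy / ny
    t = (mx - my) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

t, df = welch_t([2.1, 2.4, 2.3, 2.2], [1.1, 1.0, 1.3, 1.2])
print(round(t, 2), round(df, 1))   # t is about 12.05 with 6 degrees of freedom
```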
<p><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-8453377e0336274c.png"><img class="aligncenter size-large wp-image-1489" src="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-8453377e0336274c-1024x531.png" alt="18585978-8453377e0336274c" width="640" height="332" /></a></div>
<div>
<div>
<div>
<p>Note: the left panel shows the abundance proportions of each gene (or species) in the two groups; the middle panel shows the difference in proportions within the 95% confidence interval; the values on the right are the p-values, with p &lt; 0.05 indicating a significant difference.</p>
<p><b>7. Box plot comparison of Shannon diversity indices</b></p>
<p>The Shannon diversity indices of multiple sample groups (grouped by classification or environment) are summarised by quartiles to compare between-group differences in the Shannon index. A non-parametric Mann-Whitney test is performed at the same time to assess the significance of the between-group differences.</p>
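<p>For reference, the Shannon index plotted in these boxes is computed per sample from the taxon counts as H' = -sum(p_i * ln p_i); a stdlib Python sketch:</p>

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over observed taxa;
    zero counts contribute nothing."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)
```

<p>Four equally abundant taxa give H' = ln 4, and a single taxon gives 0, the minimum diversity.</p>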
</div>
<p><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-5c097b33070b52e3.png"><img class="aligncenter size-full wp-image-1490" src="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-5c097b33070b52e3.png" alt="18585978-5c097b33070b52e3" width="338" height="350" /></a></div>
<div>
<div>
<p>Note: the x-axis shows the sample groups and the y-axis the corresponding alpha-diversity index values; each box displays five statistics (minimum, first quartile, median, third quartile and maximum, reading the five lines from bottom to top). p &lt; 0.05 indicates a significant difference; p &lt; 0.01 an extremely significant difference.</p>
<p><b>8. Distance-based box plots</b></p>
<p>The pairwise distances of multiple sample groups (grouped by classification or environment) are summarised by quartiles to compare the within-group and between-group distance distributions. Multiple Student&#8217;s two-sample t-tests are performed at the same time to assess the significance of the between-group differences.</p>
<p>Box plots serve several purposes: identifying outliers; giving a rough estimate of the data&#8217;s characteristics; and comparing the shape of several data sets, since with several box plots arranged side by side on the same axis their medians, tail lengths, outliers and ranges can be taken in at a glance.</p>
<p>A box plot (also called a box-and-whisker plot) describes data with five statistics: the minimum, the first quartile, the median, the third quartile and the maximum. It also gives a rough view of the symmetry and spread of the data, and is particularly useful for comparing several groups of samples.</p>
</div>
<p><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-1605130e5d641edc.png"><img class="aligncenter size-full wp-image-1491" src="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-1605130e5d641edc.png" alt="18585978-1605130e5d641edc" width="740" height="682" /></a></div>
<div>
<div>
<div>
<p>Note: the first quartile (Q1), or &#8220;lower quartile&#8221;, is the value below which 25% of the sorted sample values fall. The second quartile (Q2), the &#8220;median&#8221;, is the 50% point, and the third quartile (Q3), or &#8220;upper quartile&#8221;, is the 75% point.</p>
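<p>The five statistics drawn by each box can be computed directly. This sketch uses Tukey&#8217;s hinges (medians of the lower and upper halves, excluding the middle value when n is odd), one of several common quartile conventions, so results may differ slightly from tools that interpolate percentiles:</p>

```python
def five_number_summary(data):
    """Minimum, Q1, median, Q3 and maximum of a sample, with the
    quartiles computed as medians of the lower/upper halves
    (Tukey's hinges)."""
    s = sorted(data)
    n = len(s)

    def median(v):
        m = len(v)
        mid = m // 2
        return v[mid] if m % 2 else (v[mid - 1] + v[mid]) / 2

    half = n // 2
    lower, upper = s[:half], s[half + n % 2:]
    return s[0], median(lower), median(s), median(upper), s[-1]
```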
<p><b>9. LEfSe analysis</b></p>
<p>LEfSe is a tool for discovering high-dimensional biomarkers and revealing genomic features, including genes, metabolic pathways and taxa, that distinguish two or more biological conditions (or classes). The algorithm emphasises both statistical significance and biological relevance, letting researchers identify differentially abundant features together with the classes they are associated with.</p>
<p>LEfSe gains its discriminating power from statistical tests of biological differences; it then performs additional tests to assess whether those differences are consistent with the expected biological behaviour.</p>
<p>Specifically, it first uses the non-parametric factorial Kruskal-Wallis (KW) sum-rank test to detect features with significant abundance differences and to find the classes whose abundances differ significantly. Finally, LEfSe uses linear discriminant analysis (LDA) to estimate the effect size of each differentially abundant component (species).</p>
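<p>The first step, the Kruskal-Wallis sum-rank test, reduces to a single rank-based statistic. A stdlib Python sketch on invented data (no tie correction, so it assumes distinct values; real implementations such as <code>scipy.stats.kruskal</code> correct for ties):</p>

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic for a list of groups (no tie
    correction, so values must be distinct; compare against a
    chi-squared distribution with len(groups) - 1 degrees of
    freedom for a p-value)."""
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # assumes no ties
    n = len(pooled)
    h = sum(len(g) * (sum(rank[v] for v in g) / len(g)) ** 2 for g in groups)
    return 12 / (n * (n + 1)) * h - 3 * (n + 1)
```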
</div>
<p><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-9453e2fea01fb73b.png"><img class="aligncenter size-large wp-image-1492" src="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-9453e2fea01fb73b-1024x377.png" alt="18585978-9453e2fea01fb73b" width="640" height="236" /></a></div>
<div>
<div>
<div>
<p>Caption: the plot on the left shows the LDA scores obtained from linear discriminant analysis for the microbial taxa with a significant effect in the two groups. The plot on the right is a cladogram: node size indicates abundance, with ranks arranged by default from phylum (innermost) to genus (outermost). The red and green regions mark the two groups; red nodes are taxa that play an important role in the red group, green nodes are taxa important in the green group, and yellow nodes are taxa that are not important in either group. The lettered taxon names in the figure are expanded in the legend on the right.</p>
<p><b>10. ANOSIM analysis of similarities</b></p>
<p>Analysis of similarities (ANOSIM) is a non-parametric test of whether the differences between groups (two or more) are significantly larger than the differences within groups, and hence whether the grouping is meaningful. First the Bray-Curtis distance is computed between every pair of samples and all distances are ranked from smallest to largest; the R statistic is then computed with the formula below. The sample labels are then permuted and an R* is recomputed each time; the probability that R* exceeds R is the p-value.</p>
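<p>The Bray-Curtis distance that ANOSIM starts from is a one-liner on two abundance profiles; a stdlib Python sketch with invented abundances:</p>

```python
def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance profiles:
    sum(|x_i - y_i|) / sum(x_i + y_i), ranging from 0 (identical)
    to 1 (no shared taxa)."""
    return sum(abs(a - b) for a, b in zip(x, y)) / sum(a + b for a, b in zip(x, y))
```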
</div>
<p><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-e0960fa6644a18ff.png"><img class="aligncenter size-full wp-image-1493" src="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-e0960fa6644a18ff.png" alt="18585978-e0960fa6644a18ff" width="454" height="135" /></a></p>
<div>
<div>
<p>where:</p>
<p>r&#772;_b is the mean rank of the between-group distances;</p>
<p>r&#772;_w is the mean rank of the within-group distances;</p>
<p>n is the total number of samples.</p>
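<p>Putting the formula together with the permutation step, here is a self-contained toy ANOSIM on 1-D points. It enumerates all label permutations exactly instead of sampling the usual 999 random permutations, which is only feasible because the invented data set is tiny:</p>

```python
from itertools import combinations, permutations

def anosim(points, labels):
    """ANOSIM R = (rB - rW) / (n(n-1)/4) on 1-D Euclidean distances,
    with an exact p-value from full enumeration of label permutations
    (real software samples e.g. 999 random permutations instead)."""
    pairs = list(combinations(range(len(points)), 2))
    d = [abs(points[i] - points[j]) for i, j in pairs]
    # average ranks of the pairwise distances, ties shared
    order = sorted(range(len(d)), key=lambda k: d[k])
    rank = [0.0] * len(d)
    i = 0
    while i < len(d):
        j = i
        while j < len(d) and d[order[j]] == d[order[i]]:
            j += 1
        for k in range(i, j):
            rank[order[k]] = (i + 1 + j) / 2
        i = j

    def r_stat(lab):
        between = [rank[k] for k, (a, b) in enumerate(pairs) if lab[a] != lab[b]]
        within = [rank[k] for k, (a, b) in enumerate(pairs) if lab[a] == lab[b]]
        rb = sum(between) / len(between)
        rw = sum(within) / len(within)
        n = len(points)
        return (rb - rw) / (n * (n - 1) / 4)

    r_obs = r_stat(labels)
    perms = set(permutations(labels))
    p = sum(r_stat(lab) >= r_obs for lab in perms) / len(perms)
    return r_obs, p
```

<p>Two well-separated groups give R = 1 with the smallest p-value the enumeration allows, matching the interpretation in the note below the table.</p>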
<p><b>Table. ANOSIM analysis</b></p>
</div>
<p><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-d456795bc0cddde8.png"><img class="aligncenter size-large wp-image-1494" src="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-d456795bc0cddde8-1024x344.png" alt="18585978-d456795bc0cddde8" width="640" height="215" /></a></div>
<div>
<div>
<div>
<p>Note: in theory R ranges from -1 to +1, though in practice it usually falls between 0 and 1. An R close to 1 means the between-group differences are much larger than the within-group differences; an R close to 0 means there is no clear difference between groups and within groups. The p-value reflects the statistical significance of the result: the smaller the p-value, the more significant the difference between the sample groups, with p &lt; 0.05 indicating statistical significance. &#8220;Number of permutations&#8221; is the number of label permutations performed.</p>
<p><b>11. Adonis multivariate analysis of variance</b></p>
<p>Adonis, also called permutational multivariate analysis of variance (PERMANOVA) or non-parametric MANOVA, partitions the total variance using a semimetric (e.g. Bray-Curtis) or metric (e.g. Euclidean) distance matrix, quantifies how much of the variation between samples each grouping factor explains, and assesses the statistical significance of the partition with a permutation test.</p>
<p><b>Table. Permutational MANOVA analysis</b></p>
</div>
<p><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-7fa68baa52da9d13.png"><img class="aligncenter size-large wp-image-1495" src="http://www.biofacebook.com/wp-content/uploads/2020/03/18585978-7fa68baa52da9d13-1024x132.png" alt="18585978-7fa68baa52da9d13" width="640" height="83" /></a></p>
<div>
<div>
<p>Note:</p>
<p>Group: the grouping factor;</p>
<p>Df: degrees of freedom;</p>
<p>SumsOfSqs: sums of squares (sum of squared deviations);</p>
<p>MeanSqs: mean squares, i.e. SumsOfSqs/Df;</p>
<p>F.Model: the F-test statistic;</p>
<p>R2: the proportion of the variation between samples explained by the grouping, i.e. the group sum of squares divided by the total sum of squares; the larger R2 is, the more of the variation the grouping explains;</p>
<p>Pr(&gt;F): the p-value obtained from the permutation test; the smaller it is, the more significant the between-group difference.</p>
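<p>The quantities in the table can be reproduced on toy data with the direct sums-of-squares identities of one-way PERMANOVA (Anderson&#8217;s formulation). This 1-D Euclidean sketch is illustrative only; the p-value would come from permuting the labels and recomputing F:</p>

```python
from itertools import combinations

def permanova_oneway(points, labels):
    """One-way PERMANOVA F and R2 from squared 1-D Euclidean distances,
    using SS_total = sum d_ij^2 / n and the per-group analogue for
    SS_within; SS_between = SS_total - SS_within."""
    n = len(points)
    label_set = set(labels)
    a = len(label_set)  # number of groups
    ss_total = sum((points[i] - points[j]) ** 2
                   for i, j in combinations(range(n), 2)) / n
    ss_within = 0.0
    for g in label_set:
        idx = [i for i, lab in enumerate(labels) if lab == g]
        ss_within += sum((points[i] - points[j]) ** 2
                         for i, j in combinations(idx, 2)) / len(idx)
    ss_between = ss_total - ss_within
    f = (ss_between / (a - 1)) / (ss_within / (n - a))
    r2 = ss_between / ss_total
    return f, r2
```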
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<p>作者：JarySun<br />
链接：https://www.jianshu.com/p/87f24cceaa43<br />
来源：简书<br />
著作权归作者所有。商业转载请联系作者获得授权，非商业转载请注明出处。</p></div>
]]></content:encoded>
			<wfw:commentRss>http://www.biofacebook.com/?feed=rss2&#038;p=1483</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
<title>ANOSIM, PERMANOVA/Adonis, MRPP (repost)</title>
		<link>http://www.biofacebook.com/?p=1474</link>
		<comments>http://www.biofacebook.com/?p=1474#comments</comments>
		<pubDate>Fri, 27 Mar 2020 07:13:55 +0000</pubDate>
		<dc:creator><![CDATA[szypanther]]></dc:creator>
				<category><![CDATA[统计学习]]></category>

		<guid isPermaLink="false">http://www.biofacebook.com/?p=1474</guid>
<description><![CDATA[<p></p> 1. ANOSIM analysis of similarities. Analysis of similarities (ANOSIM) is a non-parametric test of whether the differences between groups (two or more) are significantly larger than the differences within groups, and hence whether the grouping is meaningful. First the Bray-Curtis distance is computed between every pair of samples and all distances are ranked from smallest to largest; the R statistic is computed with the formula below, the labels are then permuted and R* recomputed, and the probability that R* exceeds R is the P value. <p></p> Note: the figure contains N+1 boxes, where N is the number of groups. The "Between" box represents the between-group differences and the other boxes the within-group differences of each group. R ranges from -1 to +1, in practice usually from 0 to 1; R close to 1 means the between-group differences are larger than the within-group differences, while R close to 0 means there is no clear difference. The reliability of the test is given by the P-value, with P&#60; 0.05 indicating statistical significance. 2. PERMANOVA/Adonis permutational multivariate analysis of variance. PERMANOVA (permutational multivariate analysis of variance), also called Adonis, partitions the total variance using a semimetric (e.g. Bray-Curtis) or metric (e.g. Euclidean) distance matrix, uses a linear model to quantify how much of the variation between samples is explained by grouping factors or environmental variables (e.g. clinical phenotypes, soil physicochemical properties), and assesses significance with a permutation test. <p> 3. MRPP [...]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/6634703-6fde5e9bfb6489c7.png"><img class="aligncenter size-large wp-image-1475" src="http://www.biofacebook.com/wp-content/uploads/2020/03/6634703-6fde5e9bfb6489c7-1024x419.png" alt="6634703-6fde5e9bfb6489c7" width="640" height="262" /></a></p>
<div>
<div>
<h4>1. ANOSIM analysis of similarities</h4>
<ul>
<li>Analysis of similarities (ANOSIM) is a non-parametric test of whether the differences between groups (two or more) are significantly larger than the differences within groups, and hence whether the grouping is meaningful. First the Bray-Curtis distance is computed between every pair of samples and all distances are ranked from smallest to largest; the R statistic is computed with the formula below, the sample labels are then permuted and R* is recomputed each time, and the probability that R* exceeds R is the P value.
</li>
</ul>
</div>
<p><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/6634703-ec94fa34c56b542a.png"><img class="aligncenter size-large wp-image-1476" src="http://www.biofacebook.com/wp-content/uploads/2020/03/6634703-ec94fa34c56b542a-1024x460.png" alt="6634703-ec94fa34c56b542a" width="640" height="288" /></a><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/6634703-01bd752421e6028e.png"><img class="aligncenter size-large wp-image-1477" src="http://www.biofacebook.com/wp-content/uploads/2020/03/6634703-01bd752421e6028e-1024x819.png" alt="6634703-01bd752421e6028e" width="640" height="512" /></a></p>
<div>
<div>Note: the figure contains N+1 boxes, where N is the number of groups. The &#8220;Between&#8221; box represents the differences between groups and the other boxes the within-group differences of each group. R ranges from -1 to +1, in practice usually from 0 to 1. R close to 1 means the between-group differences are larger than the within-group differences; R close to 0 means there is no clear difference between groups and within groups. The reliability of the test is given by the P-value, with P &lt; 0.05 indicating statistical significance.</div>
<h4>2. PERMANOVA/Adonis permutational multivariate analysis of variance</h4>
<div>
<div>PERMANOVA (permutational multivariate analysis of variance), also known as Adonis, partitions the total variance using a semimetric (e.g. Bray-Curtis) or metric (e.g. Euclidean) distance matrix, uses a linear model to quantify how much of the variation between samples is explained by grouping factors or environmental variables (such as clinical phenotypes or soil physicochemical properties), and assesses significance with a permutation test.</div>
<p><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/6634703-73376c3dc4dfb540.png"><img class="aligncenter size-large wp-image-1478" src="http://www.biofacebook.com/wp-content/uploads/2020/03/6634703-73376c3dc4dfb540-1024x288.png" alt="6634703-73376c3dc4dfb540" width="640" height="180" /></a><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/6634703-cf2b6e7dd53c0843.png"><img class="aligncenter size-full wp-image-1479" src="http://www.biofacebook.com/wp-content/uploads/2020/03/6634703-cf2b6e7dd53c0843.png" alt="6634703-cf2b6e7dd53c0843" width="850" height="715" /></a></div>
<div>
<h4>3. MRPP multi-response permutation procedure</h4>
<div>
<div>MRPP (Multiple Response Permutation Procedure) tests whether the differences between groups (two or more) are significantly larger than the differences within groups. Like ANOSIM, it can use a semimetric (e.g. Bray-Curtis) or metric (e.g. Euclidean) distance matrix; it computes an A statistic expressing the between-group difference and uses a permutation test to assess the significance of the grouping.</div>
<p><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/6634703-296b421d3f08ae04.png"><img class="aligncenter size-large wp-image-1480" src="http://www.biofacebook.com/wp-content/uploads/2020/03/6634703-296b421d3f08ae04-1024x262.png" alt="6634703-296b421d3f08ae04" width="640" height="164" /></a></div>
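<p>The A statistic MRPP reports can be sketched in stdlib Python as A = 1 - delta/E[delta], where delta is the group-size-weighted mean within-group distance and its expectation under random labelling equals the overall mean pairwise distance. This toy 1-D Euclidean version assumes every group has at least two samples:</p>

```python
from itertools import combinations

def mrpp_a(points, labels):
    """MRPP chance-corrected within-group agreement A = 1 - delta/E[delta]
    on 1-D Euclidean distances.  A > 0 means samples agree within groups
    more than expected by chance; each group needs >= 2 samples."""
    n = len(points)
    d = {(i, j): abs(points[i] - points[j])
         for i, j in combinations(range(n), 2)}
    delta = 0.0
    for g in set(labels):
        idx = [i for i, lab in enumerate(labels) if lab == g]
        within = [d[i, j] for i, j in combinations(idx, 2)]
        delta += (len(idx) / n) * (sum(within) / len(within))
    expected = sum(d.values()) / len(d)  # E[delta] under random labels
    return 1 - delta / expected
```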
</div>
</div>
</div>
]]></content:encoded>
			<wfw:commentRss>http://www.biofacebook.com/?feed=rss2&#038;p=1474</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
<title>What exactly are the Adonis and ANOSIM tests? (repost)</title>
		<link>http://www.biofacebook.com/?p=1461</link>
		<comments>http://www.biofacebook.com/?p=1461#comments</comments>
		<pubDate>Fri, 27 Mar 2020 06:51:55 +0000</pubDate>
		<dc:creator><![CDATA[szypanther]]></dc:creator>
				<category><![CDATA[统计学习]]></category>

		<guid isPermaLink="false">http://www.biofacebook.com/?p=1461</guid>
<description><![CDATA[<p>When you have microbial 16S sequencing done, the provider's report often includes two tests, Adonis and ANOSIM. You may have heard of the t-test, Wilcoxon test and ANOVA, but what are the Adonis and ANOSIM tests?</p> Adonis multivariate analysis of variance <p>Adonis, multivariate analysis of variance, also called non-parametric multivariate analysis of variance, partitions the total variance using a distance matrix (e.g. based on Bray-Curtis or Euclidean distances), quantifies how much of the variation between samples is explained by each grouping factor, and uses a permutation test to assess its statistical significance.</p> <p>A typical Adonis result looks like this:</p> Index Df SumsOfSqs MeanSqs F.Model R2 Pr(&#62;F) GroupFactor 4 1.0899 0.27248 1.4862 0.14883 0.011 Residuals 34 6.2335 0.18334 0.85117 Total 38 7.3234 1.00000 <p>Here GroupFactor is the grouping used in the experiment, Df the degrees of freedom, SumsOfSqs the sums of squares (sum of squared deviations), MeanSqs the mean squares (SumsOfSqs/Df), F.Model the F statistic, R2 the proportion of between-sample variation explained by the grouping (the larger R2 is, the more the grouping explains), and Pr the p-value, with values below 0.05 indicating a reliable result.</p> <p>So how is Adonis used in practice? In microbiome analysis it is usually combined with PCA. After a PCA we want to test whether the groups really differ and whether the difference is significant, and this is where the Adonis test comes in. In the figure below the three groups look separated, but is the separation truly significant, and how much of the between-sample variation does the grouping explain? The Adonis test on the right gives a clear answer: the grouping is significant, with R2 = 0.11.</p> <p></p> <p>Sylvain I A, Adams R I, Taylor J W. A different suite: The assemblage of distinct fungal [...]]]></description>
<content:encoded><![CDATA[<p>When you have microbial 16S sequencing done, the provider&#8217;s report often includes two tests, Adonis and ANOSIM. You may have heard of the t-test, Wilcoxon test and ANOVA in their many variants, but what exactly are the Adonis and ANOSIM tests?</p>
<div>
<div>
<h4>Adonis multivariate analysis of variance</h4>
<p>Adonis, multivariate analysis of variance, also called non-parametric multivariate analysis of variance, partitions the total variance using a distance matrix (e.g. based on Bray-Curtis or Euclidean distances), <strong>quantifies how much of the variation between samples is explained by each grouping factor</strong>, and uses a permutation test to assess its <strong>statistical significance</strong>.</p>
<p>A typical Adonis result looks like this:</p>
<table>
<thead>
<tr>
<th>Index</th>
<th>Df</th>
<th>SumsOfSqs</th>
<th>MeanSqs</th>
<th>F.Model</th>
<th>R2</th>
<th>Pr(&gt;F)</th>
</tr>
</thead>
<tbody>
<tr>
<td>GroupFactor</td>
<td>4</td>
<td>1.0899</td>
<td>0.27248</td>
<td>1.4862</td>
<td>0.14883</td>
<td>0.011</td>
</tr>
<tr>
<td>Residuals</td>
<td>34</td>
<td>6.2335</td>
<td>0.18334</td>
<td>0.85117</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Total</td>
<td>38</td>
<td>7.3234</td>
<td>1.00000</td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
<p>Here GroupFactor is the grouping used in the experiment,<br />
Df the degrees of freedom,<br />
SumsOfSqs the sums of squares (sum of squared deviations),<br />
MeanSqs the mean squares (SumsOfSqs/Df),<br />
F.Model the F statistic,<br />
R2 the proportion of between-sample variation explained by the grouping; the larger R2 is, the more of the variation the grouping explains,<br />
Pr the p-value; a value below 0.05 indicates that the result is significant and reliable.</p>
<p><strong>So how is Adonis used in practice?</strong><br />
In microbiome analysis, Adonis is usually combined with PCA. After running a PCA we want to test whether the groups really differ and whether the difference is significant; this is where the Adonis test comes in. In the figure below, the three groups appear separated, but is the separation truly significant, and how much of the between-sample variation does the grouping explain? The Adonis result on the right gives a clear answer: the grouping is significant, with R2 = 0.11.</p>
<p><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/8637066-0eb31addfe19bebe.png"><img class="aligncenter size-full wp-image-1465" src="http://www.biofacebook.com/wp-content/uploads/2020/03/8637066-0eb31addfe19bebe.png" alt="8637066-0eb31addfe19bebe" width="850" height="715" /></a></p>
<p>Sylvain I A, Adams R I, Taylor J W. A different suite: The assemblage of distinct fungal communities in water-damaged units of a poorly-maintained public housing building[J]. PloS one, 2019, 14(3): e0213355.</p>
<p>In R, the functions <code>adonis()</code> or <code>adonis2()</code> from the vegan package perform the Adonis test.</p>
<div class="_2Uzcx_">
<pre class="line-numbers language-r"><code class="language-r">adonis2(formula, data, permutations = 999, method = "bray",
    sqrt.dist = FALSE, add = FALSE, by = "terms",
    parallel = getOption("mc.cores"), ...)

adonis(formula, data, permutations = 999, method = "bray",
    strata = NULL, contr.unordered = "contr.sum",
    contr.ordered = "contr.poly", parallel = getOption("mc.cores"), ...)
</code></pre>
<div>
<div>
<h4>ANOSIM analysis of similarities</h4>
<p>ANOSIM, analysis of similarities, is a non-parametric test <strong>that assesses similarity in high-dimensional data by comparing the magnitude of between-group and within-group differences</strong>, and hence whether a grouping is meaningful. It can be applied to the between-group and within-group differences of two groups, or to multiple groups.</p>
<p>The principle of ANOSIM is as follows, taking the simplest two-group case as an example:<br />
suppose there are six samples in total, divided according to the experimental design into two groups, Group1 and Group2, of three samples each.</p>
<p>1. First, the within-group similarity is computed from the distances between samples inside each group.</p>
</div>
<p><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/8637066-4c0d4a9ee6608c0d.png"><img class="aligncenter size-full wp-image-1466" src="http://www.biofacebook.com/wp-content/uploads/2020/03/8637066-4c0d4a9ee6608c0d.png" alt="8637066-4c0d4a9ee6608c0d" width="307" height="257" /></a></div>
<div>2. Then the between-group similarity is computed from the distances between samples of different groups.</div>
<div><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/8637066-3b4b1374ba6a6743.png"><img class="aligncenter size-full wp-image-1467" src="http://www.biofacebook.com/wp-content/uploads/2020/03/8637066-3b4b1374ba6a6743.png" alt="8637066-3b4b1374ba6a6743" width="306" height="256" /></a></div>
<div>Combining the within-group and between-group distances gives:</div>
<div><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/8637066-8d5c1d0cd703fe77.png"><img class="aligncenter size-full wp-image-1468" src="http://www.biofacebook.com/wp-content/uploads/2020/03/8637066-8d5c1d0cd703fe77.png" alt="8637066-8d5c1d0cd703fe77" width="337" height="330" /></a></div>
<div>
<p>The R statistic is then computed from the formula:</p>
<p><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/8637066-b3c9bd5383539d8c.png"><img class="aligncenter size-full wp-image-1469" src="http://www.biofacebook.com/wp-content/uploads/2020/03/8637066-b3c9bd5383539d8c.png" alt="8637066-b3c9bd5383539d8c" width="855" height="363" /></a></p>
<div>
<div>
<p>where:<br />
r0 = mean rank of between-group dissimilarities,<br />
rw = mean rank of within-group dissimilarities,<br />
n = the number of samples.</p>
<p>From the formula it follows that R lies in [-1, 1]:<br />
when R tends towards 1, the between-group differences are larger than the within-group differences;<br />
when R = 0, there is no between-group difference, i.e. the grouping has no effect;<br />
when R tends towards -1, the between-group differences are smaller than the within-group differences.</p>
<p>When R is greater than 0, we still need to test whether the difference is significant and trustworthy; for this, ANOSIM also uses a permutation test.</p>
<blockquote><p>Permutation test: 1. Randomly reassign the original samples to the groups.<br />
2. Compute the statistic Ri for the random grouping and compare it with R.<br />
3. Repeat, e.g. 1000 times.<br />
4. The p-value is the proportion of permutations in which Ri exceeds R.</p></blockquote>
<p>After ordination analyses such as PCoA or NMDS we face the same question: the data look separated, but do the groups really differ significantly? ANOSIM can be used for this check as well.</p>
<p>The vegan package in R also provides the ANOSIM test. Below is a demonstration with R&#8217;s built-in iris data set:</p>
<pre class="line-numbers language-r"><code class="language-r">library(vegan)
library(ggplot2)
# Drop the species column
dat &lt;- subset(iris, select = -Species)
# Calculate distances
iris.dist &lt;- vegdist(dat)
# MDS analysis
m &lt;- monoMDS(iris.dist)
MDS &lt;- as.data.frame(m$points)
# Attach group information
MDS$group &lt;- iris$Species
# Plot
p &lt;- ggplot(MDS, aes(MDS1, MDS2, col = group, shape = group)) +
  geom_point() +
  theme_bw() +
  theme(legend.title = element_blank())</code></pre>
<p><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/8637066-6be283f0b46d2616.png"><img class="aligncenter size-large wp-image-1470" src="http://www.biofacebook.com/wp-content/uploads/2020/03/8637066-6be283f0b46d2616-1024x821.png" alt="8637066-6be283f0b46d2616" width="640" height="513" /></a></p>
<p>From the plot we can see directly that the between-group differences are larger than the within-group differences: the three groups separate clearly. Next we verify this visual impression with an ANOSIM test.</p>
<pre class="line-numbers language-r"><code class="language-r"># ANOSIM
anosim_result &lt;- anosim(dat, iris$Species, permutations = 999)
summary(anosim_result)
plot(anosim_result, col = c('#FFD700', '#FF7F00', '#EE2C2C', '#D02090'))</code></pre>
<p><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/8637066-402a68df1c328084.png"><img class="aligncenter size-full wp-image-1471" src="http://www.biofacebook.com/wp-content/uploads/2020/03/8637066-402a68df1c328084.png" alt="8637066-402a68df1c328084" width="968" height="756" /></a></p>
<p><a href="http://www.biofacebook.com/wp-content/uploads/2020/03/8637066-8e1374dca1a6ff46.png"><img class="aligncenter size-large wp-image-1472" src="http://www.biofacebook.com/wp-content/uploads/2020/03/8637066-8e1374dca1a6ff46-1024x817.png" alt="8637066-8e1374dca1a6ff46" width="640" height="511" /></a></p>
<p>The plot shows directly that the between-group differences are larger than the within-group differences: R = 0.858, close to 1, and the p-value of 0.001 is below 0.05, so the groups differ clearly and the grouping is meaningful.</p>
</div>
</div>
</div>
</div>
</div>
<p>Author: jlyq617<br />
Link: https://www.jianshu.com/p/dfa689f7cafd<br />
Source: Jianshu</p>
</div>
]]></content:encoded>
			<wfw:commentRss>http://www.biofacebook.com/?feed=rss2&#038;p=1461</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
<title>Docker + Trojan + Caddy deployment (repost)</title>
		<link>http://www.biofacebook.com/?p=1455</link>
		<comments>http://www.biofacebook.com/?p=1455#comments</comments>
		<pubDate>Wed, 26 Feb 2020 13:43:27 +0000</pubDate>
		<dc:creator><![CDATA[szypanther]]></dc:creator>
				<category><![CDATA[docker]]></category>
		<category><![CDATA[服务器管理]]></category>

		<guid isPermaLink="false">http://www.biofacebook.com/?p=1455</guid>
<description><![CDATA[<p>SourceURL:https://muguang.me/it/2757.html</p> Docker + Trojan + Caddy deployment. About Trojan: don't ask; if you ask, it's a proxy tool. What makes it clever is that it transports data over TLS, disguised as HTTPS requests. The Trojan server listens on port 443; ordinary requests are handed to the web server, which returns the website, while requests from a Trojan client are proxied by the Trojan server. The principle is the same as 某2ray + Websocket + TLS: disguise the traffic so that no features can be extracted and detection is avoided. <p>In this article I use Ubuntu 18.04, with Caddy as the web server, deploying the Trojan server and Caddy in Docker.</p> 0. Prerequisites: Domain name x1, Overseas server x1 <p>Before deploying, add a DNS A record pointing your domain to the server IP.</p> 1. Install Docker [...]]]></description>
				<content:encoded><![CDATA[<p>SourceURL:https://muguang.me/it/2757.html</p>
<div class="wrapper">
<article class="content">
<h1>Docker + Trojan + Caddy 部署</h1>
<div class="meta">About <a href="https://url.cn/5h1YHqK" target="_blank">Trojan</a>: don&#8217;t ask; if you ask, it&#8217;s a proxy tool. What makes it clever is that it transports data over TLS, disguised as HTTPS requests. The Trojan server listens on port 443: requests from ordinary visitors are handed to the web server, which serves the website, while requests from a Trojan client are proxied by the Trojan server. The principle is the same as the 某2ray + Websocket + TLS setup: disguise the traffic so that no features can be extracted and detection is avoided.</div>
<div class="post">
<p>In this article I use the Ubuntu 18.04 operating system, with Caddy as the web server, and deploy the Trojan server and Caddy in Docker.</p>
<h3>0. Prerequisites</h3>
<ul>
<li>Domain name x1</li>
<li>Overseas server x1</li>
</ul>
<p>Before deploying, add a DNS A record pointing your domain to the server&#8217;s IP address.</p>
<h3>1. Install Docker &amp; Docker-Compose</h3>
<ul>
<li>Add the Docker package repository
<div class="code-toolbar">
<pre class="line-numbers language-shell"><code class=" language-shell"><span class="token function">sudo</span> <span class="token function">apt-get</span> update
<span class="token function">sudo</span> <span class="token function">apt-get</span> <span class="token function">install</span> apt-transport-https ca-certificates  <span class="token function">curl</span> software-properties-common
<span class="token function">curl</span> -fsSL https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu/gpg <span class="token operator">|</span> <span class="token function">sudo</span> apt-key add -
<span class="token function">sudo</span> add-apt-repository <span class="token string">"deb [arch=amd64] https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu <span class="token variable"><span class="token variable">$(</span>lsb_release -cs<span class="token variable">)</span></span> stable"</span>
</code></pre>
<div class="shelter"></div>
<div class="toolbar">
<div class="toolbar-item">Shell</div>
<div class="toolbar-item"><button><i id="btn-copy-code" class="fontello fontello-tags"> Copy</i></button></div>
</div>
</div>
</li>
<li>Install Docker CE (Community Edition)
<div class="code-toolbar">
<pre class="line-numbers language-shell"><code class=" language-shell"><span class="token function">sudo</span> <span class="token function">apt-get</span> update -y
<span class="token function">sudo</span> <span class="token function">apt-get</span> <span class="token function">install</span> docker-ce -y</code></pre>
<div class="shelter"></div>
<div class="toolbar">
<div class="toolbar-item">Shell</div>
<div class="toolbar-item"><button><i id="btn-copy-code" class="fontello fontello-tags"> Copy</i></button></div>
</div>
</div>
</li>
<li>Install the Docker-compose container orchestration tool
<div class="code-toolbar">
<pre class="line-numbers language-shell"><code class=" language-shell"><span class="token function">sudo</span> <span class="token function">curl</span> -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-<span class="token variable"><span class="token variable">`</span><span class="token function">uname</span> -s<span class="token variable">`</span></span>-<span class="token variable"><span class="token variable">`</span><span class="token function">uname</span> -m<span class="token variable">`</span></span> -o /usr/local/bin/docker-compose
<span class="token function">sudo</span> <span class="token function">chmod</span> +x /usr/local/bin/docker-compose
docker-compose --version</code></pre>
<div class="shelter"></div>
<div class="toolbar">
<div class="toolbar-item">Shell</div>
<div class="toolbar-item"><button><i id="btn-copy-code" class="fontello fontello-tags"> Copy</i></button></div>
</div>
</div>
</li>
</ul>
<h3>2. Build the Trojan + Caddy image</h3>
<p>I have already written the Docker compose build configuration; just clone it and use it directly.</p>
<ul>
<li>Download the Trojan + Caddy build configuration from <a href="https://github.com/FaithPatrick/trojan-caddy-docker-compose/" target="_blank">GitHub</a>
<div class="code-toolbar">
<pre class="line-numbers language-shell"><code class=" language-shell"><span class="token function">git</span> clone git@github.com:FaithPatrick/trojan-caddy-docker-compose.git trojan-caddy-docker-compose <span class="token operator">&amp;&amp;</span> <span class="token function">cd</span> trojan-caddy-docker-compose</code></pre>
<div class="shelter"></div>
<div class="toolbar">
<div class="toolbar-item">Shell</div>
<div class="toolbar-item"><button><i id="btn-copy-code" class="fontello fontello-tags"> Copy</i></button></div>
</div>
</div>
</li>
<li>Edit <code>./caddy/Caddyfile</code>
<div class="code-toolbar">
<pre class="line-numbers language-caddyfile"><code class=" language-caddyfile">www.yourdomain.com:80 {
    root /usr/src/trojan
    log /usr/src/caddy.log
    index index.html
}

www.yourdomain.com:443 {
    root /usr/src/trojan
    log /usr/src/caddy.log
    index index.html
}</code></pre>
<div class="shelter"></div>
<div class="toolbar">
<div class="toolbar-item">Caddyfile</div>
<div class="toolbar-item"><button><i id="btn-copy-code" class="fontello fontello-tags"> Copy</i></button></div>
</div>
</div>
<p>Replace <code>www.yourdomain.com</code> with the domain you prepared earlier.</p></li>
<li>Edit <code>./trojan/config/config.json</code><br />
At <code>config:json:8</code>, replace <code>your_password</code> with the password you want to use; this is the password the client needs, so keep it safe.</p>
<p>At <code>config:json:12-13</code>, replace <code>your_domain_name</code> with your own domain; this path is where the certificate that Caddy automatically obtains from Let&#8217;s Encrypt is stored.</p>
<ul>
<li>Run <code>docker-compose build</code> to build the images; if there are no errors, the build succeeded.</li>
</ul>
</li>
</ul>
<h3>3. Run the Trojan + Caddy containers</h3>
<p>Run <code>docker-compose up</code> to start the containers. The log of a normal startup looks like this:</p>
<div class="code-toolbar">
<pre class="line-numbers language-shell"><code class=" language-shell">➜ docker-compose up

Starting test_caddy_1 <span class="token punctuation">..</span>. <span class="token keyword">done</span>
Attaching to test_caddy_1
caddy_1  <span class="token operator">|</span> Activating privacy features<span class="token punctuation">..</span>.
caddy_1  <span class="token operator">|</span>
caddy_1  <span class="token operator">|</span> Your sites will be served over HTTPS automatically using Let<span class="token string">'s Encrypt.
caddy_1  | By continuing, you agree to the Let'</span>s Encrypt Subscriber Agreement at:
caddy_1  <span class="token operator">|</span>   https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf
caddy_1  <span class="token operator">|</span> Please enter your email address to signify agreement and to be notified
caddy_1  <span class="token operator">|</span> <span class="token keyword">in</span> <span class="token keyword">case</span> of issues. You can leave it blank, but we don't recommend it.
caddy_1  <span class="token operator">|</span>   Email address: 2020/01/10 13:54:58 <span class="token punctuation">[</span>INFO<span class="token punctuation">]</span> acme: Registering account <span class="token keyword">for</span>
caddy_1  | 2020/01/10 13:54:58 [INFO][xx.xx.xx] acme: Obtaining bundled SAN certificate
caddy_1  | 2020/01/10 13:54:58 [INFO][xx.xx.xx] AuthURL: https://acme-v02.api.letsencrypt.org/acme/authz-v3/xxxxxx
caddy_1  | 2020/01/10 13:54:58 [INFO][xx.xx.xx] acme: Trying to solve HTTP-01
caddy_1  | 2020/01/10 13:54:58 [INFO][xx.xx.xx] Served key authentication
caddy_1  | 2020/01/10 13:54:58 [INFO][xx.xx.xx] Served key authentication
caddy_1  | 2020/01/10 13:55:03 [INFO][xx.xx.xx] The server validated our request
caddy_1  | 2020/01/10 13:55:03 [INFO][xx.xx.xx] acme: Validations succeeded; requesting certificates
caddy_1  | 2020/01/10 13:55:04 [INFO][xx.xx.xx] Server responded with a certificate.
caddy_1  | 2020/01/10 13:55:04 [INFO][xx.xx.xx] Certificate written to disk: /root/.caddy/acme/acme-v02.api.letsencrypt.org/sites/xx.xx.xx/xx.xx.xx.crt
caddy_1  | done.

Starting test_trojan_1 ... done
Attaching to test_caddy_1, test_trojan_1
trojan_1  | ssl certs is empty - checking...
trojan_1  | ssl certs is empty - checking...
trojan_1  | Welcome to trojan 1.14.0
trojan_1  | [2020-01-10 15:40:58] [WARN] trojan service (server) started at 0.0.0.0:443</code></pre>
</div>
<p>To keep the containers running in the background, start them with <code>docker-compose up -d</code>.</p>
<p>Visit <code>https://your-domain</code> in a browser and check the response; by default, my build script serves the Bootstrap starter page:</p>
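<p>As a sketch of the detached workflow (the service names <code>caddy</code> and <code>trojan</code> are assumptions based on my compose file; yours may differ):</p>

```shell
# Start the whole stack in the background (detached mode)
docker-compose up -d

# Confirm both services show "Up" in the State column
docker-compose ps

# Tail the trojan logs to verify it bound to port 443
docker-compose logs -f trojan
```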
<p><img src="https://muguang.me/usr/uploads/2020/01/1221849849.png" alt="20200111012712.png" /></p>
<p>The website content lives in the <code>./wwwroot/trojan/</code> directory. You can put whatever you like there; just make it look as much like a normal website as possible.</p>
<p>With that, the Docker + Trojan + Caddy server deployment is complete.</p>
<h3>4. Trojan Clients</h3>
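<p>You can also verify the deployment from the command line; a quick sketch (replace <code>your-domain</code> with your actual domain):</p>

```shell
# -I fetches response headers only; expect a 200 status over HTTPS
curl -I https://your-domain

# Inspect the certificate Caddy obtained: issuer should be Let's Encrypt,
# and the validity dates should be current
echo | openssl s_client -connect your-domain:443 -servername your-domain 2>/dev/null \
  | openssl x509 -noout -issuer -dates
```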
<p>Not much to add here; official clients for Windows and macOS are available in the official repository: <a href="https://url.cn/5r6mb9J" target="_blank">https://url.cn/5r6mb9J</a><br />
Of course, you can also look for more user-friendly third-party clients or router firmware with Trojan support.</p>
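<p>For reference, a minimal client-side <code>config.json</code> sketch for the command-line trojan client (all values here are placeholders; the password must match the one configured on the server):</p>

```json
{
    "run_type": "client",
    "local_addr": "127.0.0.1",
    "local_port": 1080,
    "remote_addr": "your-domain",
    "remote_port": 443,
    "password": ["your-password"],
    "ssl": {
        "verify": true,
        "sni": "your-domain"
    }
}
```

<p>Once the client is running, point your browser or system proxy at the SOCKS5 endpoint <code>127.0.0.1:1080</code>.</p>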
</div>
</article>
</div>
]]></content:encoded>
			<wfw:commentRss>http://www.biofacebook.com/?feed=rss2&#038;p=1455</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
