Distributed Multi-sense Word Embedding (DMWE)
The DMWE tool is a parallelization of the Skip-Gram Mixture algorithm on top of the DMTK parameter server. It provides an efficient solution for scaling multi-sense word embedding to industry-size datasets.
The DMWE tool runs in the following manner:
On the client side (running on multiple nodes), three local training steps are executed repeatedly:
1. Get the latest parameters from the DMTK parameter server
2. Run the Skip-Gram Mixture algorithm to generate updates to the current parameters
3. Send the parameter updates to the DMTK parameter server
On the server side, the DMTK parameter server performs two tasks:
1. Pack the requested parameters and send them to clients
2. Aggregate parameter updates from different clients and merge them into the global parameters
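The server-side aggregation step can be sketched as follows. This is an illustrative C++ sketch, not the tool's actual code: the `Params` type and the purely additive `aggregate` function are assumptions made for illustration; the real DMTK protocol and data structures differ.

```cpp
#include <cassert>
#include <unordered_map>
#include <vector>

// Hypothetical model of a parameter set: word id -> embedding vector.
using Params = std::unordered_map<int, std::vector<float>>;

// Server side: merge one client's additive updates into the global parameters.
inline void aggregate(Params& global, const Params& update) {
    for (const auto& kv : update) {
        auto& row = global[kv.first];
        row.resize(kv.second.size(), 0.0f);  // lazily create missing rows
        for (std::size_t i = 0; i < kv.second.size(); ++i)
            row[i] += kv.second[i];          // accumulate the client's delta
    }
}
```

Under this additive model, updates arriving from different clients can be merged in any order, which is what makes asynchronous aggregation on the server straightforward.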
Word2vec uses a single embedding vector for each word, which is insufficient to express the multiple meanings of polysemous words. To address this problem, the Skip-Gram Mixture model was proposed to produce multiple embedding vectors for polysemous words. However, computing multiple vectors per word is computationally expensive, so we developed the Distributed Multi-sense Word Embedding (DMWE) tool, which is highly scalable and efficient and can be used to train multi-sense embedding vectors on very large-scale datasets. The training process is powered by the DMTK framework:
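To illustrate the multi-sense idea, here is a simplified sketch (not the Skip-Gram Mixture implementation itself): each polysemous word keeps several sense vectors, and a sense is chosen by how well it fits the current context. The `best_sense` helper and the hard arg-max assignment are simplifications for illustration; the actual model assigns senses probabilistically.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Pick the sense vector that best matches the context vector (dot product).
// A polysemous word like "bank" would have one row per sense.
inline std::size_t best_sense(const std::vector<std::vector<float>>& senses,
                              const std::vector<float>& context) {
    std::size_t best = 0;
    float best_score = -1e30f;
    for (std::size_t s = 0; s < senses.size(); ++s) {
        float score = 0.0f;
        for (std::size_t i = 0; i < context.size(); ++i)
            score += senses[s][i] * context[i];
        if (score > best_score) { best_score = score; best = s; }
    }
    return best;
}
```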
The DMTK parameter server stores the parameters in a distributed way: each machine holds only a partition of the entire parameter set. This allows the overall embedding model to be very large. For example, in our experiment on the ClueWeb data, the vocabulary size is 21 million and the number of parameters exceeds 2 billion.
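One simple way such a partition can work is to assign each embedding row to exactly one server by its id. This is a hypothetical sketch; the actual DMTK partitioning scheme may differ.

```cpp
#include <cassert>
#include <cstdint>

// Shard embedding rows across servers: row (word id) -> owning server.
// Modulo hashing is the simplest static partitioning scheme.
inline int owner_server(std::int64_t row_id, int num_servers) {
    return static_cast<int>(row_id % num_servers);
}
```

With over 2 billion float parameters (roughly 8 GB at 4 bytes each), splitting the rows over, say, 8 server machines leaves each holding on the order of 1 GB, which is what makes models of this size feasible.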
The training process on the clients is conducted in a streaming manner and is automatically pipelined. Specifically, during training, the data are processed block by block. For each block, the client goes through the three steps described above. The parameter requests and model training steps of successive data blocks are pipelined so as to hide the delay caused by network communication. Furthermore, in this way, a client only needs to hold the parameters for a few data blocks at a time, resulting in very economical memory usage.
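The pipelining idea can be sketched as a prefetching loop: while block i is being trained on, the parameter request for block i+1 is already in flight, so network latency overlaps with computation. `fetch` and `train` here are hypothetical stand-ins for the real parameter-request and training calls.

```cpp
#include <cassert>
#include <cstddef>
#include <future>
#include <vector>

// Process data blocks with a one-block-deep prefetch pipeline:
// the fetch for block i+1 runs concurrently with training on block i.
template <typename Fetch, typename Train>
void pipelined_run(const std::vector<int>& blocks, Fetch fetch, Train train) {
    if (blocks.empty()) return;
    auto pending = std::async(std::launch::async, fetch, blocks[0]);
    for (std::size_t i = 0; i < blocks.size(); ++i) {
        auto params = pending.get();        // wait for block i's parameters
        if (i + 1 < blocks.size())          // overlap: start fetch for i+1
            pending = std::async(std::launch::async, fetch, blocks[i + 1]);
        train(blocks[i], params);           // compute while the fetch runs
    }
}
```

Because only the current block's parameters (plus the one in flight) are held at a time, memory usage stays bounded regardless of corpus size.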
To download the source code of DMWE, please run
$ git clone https://github.com/Microsoft/distributed_skipgram_mixture
Please note that DMWE is implemented in C++ for performance considerations.
DMWE is built on top of the DMTK parameter server, so please download and build that project first.
For Windows
- Open windows\distributed_skipgram_mixture\distributed_skipgram_mixture.sln using Visual Studio 2013. Add the necessary include paths (for example, the path for DMTK Multiverso) and lib paths. Then build the solution.
For Ubuntu (Tested on Ubuntu 12.04)
- Download and build the dependencies by running $ sh scripts/build.sh. Then modify the include and lib paths in the Makefile and run $ make all -j4.
Training on a single machine
- Initialize the settings in run.py according to your preferences
- Run run.py in the solution directory
Training in a distributed setting
For MPI users:
- Create a host.txt file containing all the machines to be used for training
- Split your dataset into several parts and store them in the same directory on these machines
- Distribute the same executable file to the same directory on these machines
- Run the command line "smpd.exe -d -p port" on every machine
- Run run.py on one of the machines, with host.txt as its argument
For ZMQ users:
- Compile the library of the DMTK parameter server, specifying the communication mode as ZMQ
- Compile the project Multiverso.Sever, and you will get the executable Multiverso.Sever.exe
- Prepare a configuration file end_points.txt to describe the server endpoints
- Add a parameter setting in run.py, e.g., '_endpoint_file=end_points.txt'
- Start Multiverso.Sever.exe on each server machine with appropriate command line arguments (run Multiverso.Sever.exe -help for further information)
- Execute run.py on one of the machines, with end_points.txt as its argument
Algorithm configuration for DMWE
For the Skip-Gram Mixture word embedding algorithm, we provide hyperparameters such as the embedding size, the number of polysemous words, and the number of senses, among others. You can specify their values in run.py.
For distributed training, users can configure the size of the data block and the mechanism for parameter updates (such as ASP - Asynchronous Parallel, SSP - Stale Synchronous Parallel, BSP - Bulk Synchronous Parallel, and MA - Model Average) by setting the parameters in run.py. For more details, please refer to the documentation of the DMTK parameter server.
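These mechanisms differ in how tightly the workers are synchronized, and SSP makes this explicit with a staleness bound. Here is a minimal sketch of the SSP progress rule; `may_proceed` is illustrative only, not a DMTK API.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// SSP rule: a worker at clock my_clock may proceed only if the slowest
// worker is within `staleness` clocks of it.
inline bool may_proceed(const std::vector<int>& worker_clocks,
                        int my_clock, int staleness) {
    int slowest = *std::min_element(worker_clocks.begin(),
                                    worker_clocks.end());
    return my_clock - slowest <= staleness;
}
```

In these terms, BSP corresponds to a staleness of 0 (every worker waits for all others each clock), while ASP corresponds to an unbounded staleness (no worker ever waits).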
The details of all the parameters in run.py are explained in parameters_setting.txt.
We report the performance of the DMWE tool on the English versions of Wiki2014 and Clueweb09. The statistics* of these datasets and the performance of DMWE are given below. The experiments were run on 20 cores of an Intel Xeon E5-2670 CPU on each machine.
|Dataset|Token#|Vocabulary size|Embedding dimension|Machine#|Training time / epoch (seconds)|Spearman's Rank Correlation on Word Similarity in Context|
|---|---|---|---|---|---|---|
* The dataset statistics are computed after data preprocessing.
- For fairness of comparison, Word2Vec is configured as Skip-Gram + Hierarchical Softmax. For DMWE, ASP was used as the mechanism for parameter updates. The data block size was set to 50k for Wiki2014 and 750k for Clueweb09.
- Given that the Clueweb09 dataset is very large, we only went through the data once during training (one epoch). For the Wiki2014 dataset, the results were obtained after 20 epochs.
- The results clearly show that DMWE achieves a good speedup by leveraging the DMTK framework, compared to its single-machine version.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient Estimation of Word Representations in Vector Space. In Proceedings of the ICLR Workshop, 2013.
Fei Tian, Hanjun Dai, Jiang Bian, Bin Gao, Rui Zhang, Enhong Chen, and Tie-Yan Liu. A Probabilistic Model for Learning Multi-prototype Word Embeddings. In Proceedings of COLING, 2014, pp. 151-160.
Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. Improving Word Representations via Global Context and Multiple Word Prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2012, pp. 873-882.