
[Python AI] 34. BERT Model (3): Building a BERT Model with the keras-bert Library for Weibo Sentiment Analysis


Starting with this column, the author formally studies Python deep learning, neural networks, and related AI topics. The previous article opened a new topic, BERT, and introduced installing the keras-bert library, its basic usage, and a text classification task. This article builds a BERT model with the keras-bert library and applies it to Weibo sentiment analysis. It is an introductory article, and I hope it helps you!

The code in this article is based on the blog of "山阴少年", combined with my own experience; I have reproduced and explained his code in detail. I hope it helps you, especially beginners, and I strongly recommend following that author's articles.

NLP (35) Multi-class text classification with keras-bert: /percent4/keras_bert_text_classification

The Weibo sentiment prediction results are shown below:

原文: 《长津湖》这部电影真的非常好看,今天看完好开心,爱了爱了。强烈推荐大家,哈哈!!!
预测标签: 喜悦
原文: 听到这个消息真心难受,很伤心,怎么这么悲剧。保佑保佑,哭
预测标签: 哀伤
原文: 愤怒,我真的挺生气的,怒其不争,哀其不幸啊!
预测标签: 愤怒

Table of Contents

I. Introducing the BERT Model
II. Dataset
III. Weibo Sentiment Analysis with Machine Learning
IV. Weibo Sentiment Analysis with the BERT Model
    1. Model Training
    2. Model Evaluation
    3. Model Prediction
V. Summary

This column combines the author's earlier blog posts, AI experience, and related videos and papers; as the series goes deeper it will cover more Python AI cases and applications. These are introductory articles, and I hope they help you; if there are errors or shortcomings, please bear with me. As a beginner in AI, I hope to grow together with you through these posts, stroke by stroke. After many years of blogging, this is my first attempt at a paid column, written to earn a little milk-powder money for my little one, but most posts, especially the introductory ones, will remain free. I will put real care into this column so that it is worth the readers' while. If you have questions, feel free to message me; I only hope you learn something from this series. Let's keep at it together~

Keras code download: /eastmountyxz/AI-for-Keras
TensorFlow code download: /eastmountyxz/AI-for-TensorFlow

Previous articles in this series:

[Python AI] 1. Setting up a TensorFlow 2.0 environment and getting started with neural networks
[Python AI] 2. TensorFlow basics and a univariate line-fitting example
[Python AI] 3. TensorFlow basics: Session, variables, feed values, and activation functions
[Python AI] 4. Building a regression neural network in TensorFlow and Optimizer optimizers
[Python AI] 5. TensorBoard visualization basics and drawing the whole neural network
[Python AI] 6. Classification learning with TensorFlow and the MNIST handwritten digit recognition example
[Python AI] 7. What is overfitting, and solving overfitting in neural networks with dropout
[Python AI] 8. CNN principles explained and writing a CNN in TensorFlow
[Python AI] 9. Installing gensim Word2Vec and computing Chinese short-text similarity on 《庆余年》
[Python AI] 10. Custom CNN image classification with TensorFlow + OpenCV, compared with the machine-learning KNN image classifier
[Python AI] 11. How to save neural network parameters in TensorFlow
[Python AI] 12. RNN and LSTM principles explained and an RNN classification example in TensorFlow
[Python AI] 13. Evaluating neural networks, plotting loss curves, and computing F-scores for an image classification example
[Python AI] 14. LSTM RNN regression: predicting a sine curve
[Python AI] 15. Unsupervised learning with Autoencoders: principles and clustering visualization
[Python AI] 16. Setting up Keras, getting-started basics, and a regression neural network example
[Python AI] 17. Building a classification neural network in Keras and the MNIST digit image example
[Python AI] 18. Building a convolutional neural network in Keras and CNN principles explained

[Python AI] 19. Building a recurrent neural network classifier in Keras and RNN principles explained
[Python AI] 20. Text classification with Keras + RNN vs. text classification with traditional machine learning
[Python AI] 21. Chinese text classification with Word2Vec + CNN, compared with machine learning (RF\DTC\SVM\KNN\NB\LR)
[Python AI] 22. Sentiment analysis and emotion computation based on the Dalian University of Technology emotion lexicon
[Python AI] 23. Sentiment classification with machine learning and TF-IDF (with detailed NLP data cleaning)
[Python AI] 24. Setting up a Keras environment on a 易学智能 GPU for LSTM classification of malicious URL requests
[Python AI] 26. Medical named entity recognition with BiLSTM-CRF (part 1): data preprocessing
[Python AI] 27. Medical named entity recognition with BiLSTM-CRF (part 2): model construction
[Python AI] 28. A comprehensive summary of Chinese text classification with Keras (CNN, TextCNN, LSTM, BiLSTM, BiLSTM+Attention)
[Python AI] 29. What is a generative adversarial network (GAN)? Basic principles and code (1)
[Python AI] 30. Building a CNN in Keras to recognize Arabic handwriting images
[Python AI] 31. BiLSTM Weibo sentiment classification and LDA topic mining with Keras
[Python AI] 32. BERT Model (1): Keras-bert basic usage and pre-trained models
[Python AI] 33. BERT Model (2): Building a BERT model with the keras-bert library for text classification
[Python AI] 34. BERT Model (3): Building a BERT model with the keras-bert library for Weibo sentiment analysis

I. Introducing the BERT Model

The principles behind the BERT model will be covered in a later article, combining Google's paper with a discussion of the model's advantages.

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Paper: /pdf/1810.04805.pdf
Code: /google-research/bert

BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained language representation model released by the Google AI team in 2018. It achieved astonishing results on SQuAD 1.1, the top machine reading comprehension benchmark, and set new state-of-the-art scores on 11 different NLP tasks, including pushing the GLUE benchmark to 80.4% (an absolute improvement of 7.6%) and reaching 86.7% accuracy on MultiNLI (an absolute improvement of 5.6%). It was clear that BERT would be a milestone for NLP, and it was one of the most important advances in the field at the time.

Unlike earlier approaches that pre-train with a traditional unidirectional language model, or shallowly concatenate two unidirectional language models, BERT uses a masked language model (MLM) objective so that it can learn deep bidirectional language representations. Its framework is shown in the figure below; later articles will explain it in detail, this is only an introduction, and readers are encouraged to read the original paper.
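To make the MLM idea a bit more concrete, here is a tiny illustrative sketch. It is not BERT's actual pretraining code, and the token list is an assumed, already-segmented sentence; it only shows the core trick of randomly replacing about 15% of the tokens with a [MASK] symbol, after which the model would be trained to recover the original tokens at those positions.

import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]"):
    # Randomly replace a fraction of tokens with [MASK]; return the masked
    # sequence plus the positions and original tokens the model must recover.
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if random.random() < mask_rate:
            targets[i] = tok          # ground truth the model must predict
            masked.append(mask_token)
        else:
            masked.append(tok)
    return masked, targets

# Illustrative usage with an assumed, already-tokenized sentence
print(mask_tokens(["今天", "看", "完", "电影", "很", "开心"]))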

II. Dataset

Data description: the dataset contains Weibo posts labelled with one of three moods (喜悦 joy, 愤怒 anger, 哀伤 sadness); the training set has 209,043 samples and the test set 97,366. The original post shows a screenshot of the data description here.

Data example: each row of the CSV has a 'label' column and a 'content' column. The original post shows a screenshot of sample rows here.

Note: only while running the experiments did the author notice that the "厌恶-55267" (disgust) and "低落-55267" (depressed) subsets are exactly the same, so we treat this as a three-class problem; what matters most here is the idea. A small preprocessing sketch is given after the reference link below.

Download address:

/eastmountyxz/Datasets-Text-Mining

Reference link:

/SophonPlus/ChineseNlpCorpus/blob/master/datasets/simplifyweibo_4_moods/intro.ipynb
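As promised above, here is a small preprocessing sketch showing how three-class CSVs of this shape could be built from the simplifyweibo_4_moods corpus. It is an assumption, not the author's actual script: the source column names ('label', 'review'), the class mapping, and the 8:2 split ratio are all illustrative.

# Hypothetical sketch: build the 3-class train/test CSVs from simplifyweibo_4_moods
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("simplifyweibo_4_moods.csv")      # assumed columns: label (0-3), review
id2name = {0: "喜悦", 1: "愤怒", 2: "哀伤"}           # class 3 dropped (duplicate texts, see note above)
df = df[df["label"].isin(id2name.keys())].copy()
df["label"] = df["label"].map(id2name)
df = df.rename(columns={"review": "content"})[["label", "content"]]

# Assumed 8:2 split; the author's actual split ratio is not stated
train_df, test_df = train_test_split(df, test_size=0.2, random_state=42, stratify=df["label"])
train_df.to_csv("data/weibo_3_moods_train.csv", index=False)
test_df.to_csv("data/weibo_3_moods_test.csv", index=False)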

III. Weibo Sentiment Analysis with Machine Learning

First, we introduce the machine learning code for Weibo sentiment analysis. The main steps are:

1. Read the data
2. Data preprocessing (Chinese word segmentation)
3. TF-IDF computation
4. Build the classification model
5. Prediction and evaluation

The complete code is as follows:

# -*- coding: utf-8 -*-
"""
Created on Mon Sep 27 22:21:53
@author: xiuzhang
"""
import jieba
import pandas as pd
import numpy as np
from collections import Counter
from scipy.sparse import coo_matrix
from sklearn import feature_extraction
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn import svm
from sklearn import neighbors
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import AdaBoostClassifier

#-----------------------------------------------------------------------------
# Read data
train_path = 'data/weibo_3_moods_train.csv'
test_path = 'data/weibo_3_moods_test.csv'
types = {0: '喜悦', 1: '愤怒', 2: '哀伤'}
pd_train = pd.read_csv(train_path)
pd_test = pd.read_csv(test_path)
print('训练集数目(总体):%d' % pd_train.shape[0])
print('测试集数目(总体):%d' % pd_test.shape[0])

# Chinese word segmentation
train_words = []
test_words = []
train_labels = []
test_labels = []
stopwords = ["[", "]", ")", "(", ")", "(", "【", "】", "!", ",", "$",
             "·", "?", ".", "、", "-", "—", ":", ":", "《", "》", "=",
             "。", "…", "“", "?", "”", "~", " ", "-", "+", "\\", "‘",
             "~", ";", "’", "...", "..", "&", "#", "....", ",",
             "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10",
             "的", "和", "之", "了", "哦", "那", "一个"]

for line in range(len(pd_train)):
    dict_label = pd_train['label'][line]
    dict_content = str(pd_train['content'][line])  # float => str
    cut_words = ""
    data = dict_content.strip("\n")
    data = data.replace(",", "")  # commas must be removed, or extra CSV columns appear
    seg_list = jieba.cut(data, cut_all=False)
    for seg in seg_list:
        if seg not in stopwords:
            cut_words += seg + " "
    label = -1
    if dict_label == "喜悦":
        label = 0
    elif dict_label == "愤怒":
        label = 1
    elif dict_label == "哀伤":
        label = 2
    else:
        label = -1
    train_labels.append(label)
    train_words.append(cut_words)
print(len(train_labels), len(train_words))  # 209043 209043

for line in range(len(pd_test)):
    dict_label = pd_test['label'][line]
    dict_content = str(pd_test['content'][line])
    cut_words = ""
    data = dict_content.strip("\n")
    data = data.replace(",", "")
    seg_list = jieba.cut(data, cut_all=False)
    for seg in seg_list:
        if seg not in stopwords:
            cut_words += seg + " "
    label = -1
    if dict_label == "喜悦":
        label = 0
    elif dict_label == "愤怒":
        label = 1
    elif dict_label == "哀伤":
        label = 2
    else:
        label = -1
    test_labels.append(label)
    test_words.append(cut_words)
print(len(test_labels), len(test_words))  # 97366 97366
print(test_labels[:5])  # e.g. [0, 0, 1, 2, 0], i.e. 喜悦, 喜悦, 愤怒, 哀伤, 喜悦

#-----------------------------------------------------------------------------
# TF-IDF computation
# Turn the texts into a term-frequency matrix: element a[i][j] is the frequency of word j in document i
vectorizer = CountVectorizer(min_df=100)  # min_df keeps the vocabulary small and avoids MemoryError

# TfidfTransformer converts the term-frequency matrix into TF-IDF weights
transformer = TfidfTransformer()

# The inner fit_transform builds the term-frequency matrix; the outer one computes TF-IDF
tfidf = transformer.fit_transform(vectorizer.fit_transform(train_words + test_words))
for n in tfidf[:5]:
    print(n)
print(type(tfidf))

# All words in the bag-of-words vocabulary
word = vectorizer.get_feature_names()
for n in word[:10]:
    print(n)
print("单词数量:", len(word))

# Extract the TF-IDF matrix: element w[i][j] is the TF-IDF weight of word j in document i
X = coo_matrix(tfidf, dtype=np.float32).toarray()  # sparse matrix => dense array
print(X.shape)
print(X[:10])

X_train = X[:len(train_labels)]
X_test = X[len(train_labels):]
y_train = train_labels
y_test = test_labels
print(len(X_train), len(X_test), len(y_train), len(y_test))

#-----------------------------------------------------------------------------
# Classification model
clf = MultinomialNB()
#clf = svm.LinearSVC()
#clf = LogisticRegression(solver='liblinear')
#clf = RandomForestClassifier(n_estimators=10)
#clf = neighbors.KNeighborsClassifier(n_neighbors=7)
#clf = AdaBoostClassifier()
clf.fit(X_train, y_train)
print('模型的准确度:{}'.format(clf.score(X_test, y_test)))
pre = clf.predict(X_test)
print("分类")
print(len(pre), len(y_test))
print(classification_report(y_test, pre, digits=4))

The output is as follows:

训练集数目(总体):209043
测试集数目(总体):97366
Building prefix dict from the default dictionary ...
Dumping model to file cache C:\Users\xdtech\AppData\Local\Temp\jieba.cache
Loading model cost 0.885 seconds.
Prefix dict has been built succesfully.
<class 'scipy.sparse.csr.csr_matrix'>
单词数量: 6997
(306409, 6997)
[[0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 ...
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]]
209043 97366 209043 97366
模型的准确度:0.6670398290984533
分类
97366 97366
             precision    recall  f1-score   support

          0     0.6666    0.9833    0.7945     61453
          1     0.6365    0.1184    0.1997     17461
          2     0.7071    0.1330    0.2240     18452

avg / total     0.6689    0.6670    0.5797     97366
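One reason recall collapses to around 0.12-0.13 for classes 1 (愤怒, anger) and 2 (哀伤, sadness) is class imbalance: 喜悦 (joy) has 61,453 test samples versus roughly 17,000-18,000 each for the other two. A hedged, optional experiment, not part of the original script, is to switch to one of the commented-out classifiers that supports class weighting:

# Optional experiment (not in the original script): weight classes by inverse frequency
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

clf = LogisticRegression(solver='liblinear', class_weight='balanced')
clf.fit(X_train, y_train)          # X_train / y_train come from the script above
pre = clf.predict(X_test)
print(classification_report(y_test, pre, digits=4))

Whether this actually raises the macro F-score depends on the data; it simply trades some accuracy on the majority class for recall on the minority classes.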

IV. Weibo Sentiment Analysis with the BERT Model

The model pipeline is shown in the figure below (figure in the original post):

1. Model Training

blog34_kerasbert_train.py

The code is as follows:

# -*- coding: utf-8 -*-
"""
Created on Wed Nov 24 00:09:48
@author: xiuzhang
"""
import os
import json
import codecs
import numpy as np
import pandas as pd
import tensorflow as tf
from keras_bert import load_trained_model_from_checkpoint, Tokenizer
from keras.layers import *
from keras.models import Model
from keras.optimizers import Adam

os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# Cap the fraction of GPU memory each process may use; 0.9 means up to 90% of the GPU
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.9)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))

maxlen = 300
BATCH_SIZE = 8
config_path = 'chinese_L-12_H-768_A-12/bert_config.json'
checkpoint_path = 'chinese_L-12_H-768_A-12/bert_model.ckpt'
dict_path = 'chinese_L-12_H-768_A-12/vocab.txt'

# Read the vocab dictionary
token_dict = {}
with codecs.open(dict_path, 'r', 'utf-8') as reader:
    for line in reader:
        token = line.strip()
        token_dict[token] = len(token_dict)

#------------------------------------------Class and function definitions--------------------------------------
# Characters missing from the dictionary are mapped to [UNK]
class OurTokenizer(Tokenizer):
    def _tokenize(self, text):
        R = []
        for c in text:
            if c in self._token_dict:
                R.append(c)
            else:
                R.append('[UNK]')  # any remaining character becomes [UNK]
        return R

tokenizer = OurTokenizer(token_dict)

# Pad sequences in a batch to the same length
def seq_padding(X, padding=0):
    L = [len(x) for x in X]
    ML = max(L)
    return np.array([
        np.concatenate([x, [padding] * (ML - len(x))]) if len(x) < ML else x for x in X
    ])

class DataGenerator:
    def __init__(self, data, batch_size=BATCH_SIZE):
        self.data = data
        self.batch_size = batch_size
        self.steps = len(self.data) // self.batch_size
        if len(self.data) % self.batch_size != 0:
            self.steps += 1

    def __len__(self):
        return self.steps

    def __iter__(self):
        while True:
            idxs = list(range(len(self.data)))
            np.random.shuffle(idxs)
            X1, X2, Y = [], [], []
            for i in idxs:
                d = self.data[i]
                text = d[0][:maxlen]
                x1, x2 = tokenizer.encode(first=text)
                y = d[1]
                X1.append(x1)
                X2.append(x2)
                Y.append(y)
                if len(X1) == self.batch_size or i == idxs[-1]:
                    X1 = seq_padding(X1)
                    X2 = seq_padding(X2)
                    Y = seq_padding(Y)
                    yield [X1, X2], Y
                    [X1, X2, Y] = [], [], []

# Build the model
def create_cls_model(num_labels):
    bert_model = load_trained_model_from_checkpoint(config_path, checkpoint_path, seq_len=None)
    for layer in bert_model.layers:
        layer.trainable = True
    x1_in = Input(shape=(None,))
    x2_in = Input(shape=(None,))
    x = bert_model([x1_in, x2_in])
    cls_layer = Lambda(lambda x: x[:, 0])(x)                 # take the [CLS] vector for classification
    p = Dense(num_labels, activation='softmax')(cls_layer)   # multi-class output
    model = Model([x1_in, x2_in], p)
    model.compile(loss='categorical_crossentropy',
                  optimizer=Adam(1e-5),
                  metrics=['accuracy'])
    model.summary()
    return model

#------------------------------------------Main-----------------------------------------
if __name__ == '__main__':
    # Data preprocessing
    train_df = pd.read_csv("data/weibo_3_moods_train.csv").fillna(value="")
    test_df = pd.read_csv("data/weibo_3_moods_test.csv").fillna(value="")
    print("begin data processing...")

    labels = train_df["label"].unique()
    print(labels)
    with open("label.json", "w", encoding="utf-8") as f:
        f.write(json.dumps(dict(zip(range(len(labels)), labels)), ensure_ascii=False, indent=2))

    train_data = []
    test_data = []
    for i in range(train_df.shape[0]):
        label, content = train_df.iloc[i, :]
        label_id = [0] * len(labels)
        for j, _ in enumerate(labels):
            if _ == label:
                label_id[j] = 1
        train_data.append((content, label_id))
    print(train_data[0])

    for i in range(test_df.shape[0]):
        label, content = test_df.iloc[i, :]
        label_id = [0] * len(labels)
        for j, _ in enumerate(labels):
            if _ == label:
                label_id[j] = 1
        test_data.append((content, label_id))
    print(len(train_data), len(test_data))
    print("finish data processing!\n")

    # Model training
    model = create_cls_model(len(labels))
    train_D = DataGenerator(train_data)
    test_D = DataGenerator(test_data)
    print("begin model training...")
    print(len(train_D), len(test_D))  # 26131 12171

    model.fit_generator(train_D.__iter__(),
                        steps_per_epoch=len(train_D),
                        epochs=10,
                        validation_data=test_D.__iter__(),
                        validation_steps=len(test_D))
    print("finish model training!")

    # Save the model
    model.save('cls_mood.h5')
    print("Model saved!")

    result = model.evaluate_generator(test_D.__iter__(), steps=len(test_D))
    print("模型评估结果:", result)

The model architecture is shown in the figure below (see the original post); training in this article runs on a GPU.
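Before launching a run that takes several hours, it can be worth confirming that TensorFlow actually sees the GPU. A small check using the TensorFlow 1.x API assumed by the training script:

# Quick check (TensorFlow 1.x API) that a GPU is actually visible
import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.test.is_gpu_available())                          # True if TF can use a GPU
print([d.name for d in device_lib.list_local_devices()])   # e.g. ['/device:CPU:0', '/device:GPU:0']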

The label set and dataset sizes printed during preprocessing are:

['哀伤' '喜悦' '愤怒']
209043 97366

The training results are as follows:

Epoch 1/3
15000/15000 [==============================] - 3561s 237ms/step - loss: 0.6973 - acc: 0.6974 - val_loss: 1.2818 - val_acc: 0.6068
Epoch 2/3
15000/15000 [==============================] - 3544s 236ms/step - loss: 0.5900 - acc: 0.7523 - val_loss: 1.5190 - val_acc: 0.6007
Epoch 3/3
15000/15000 [==============================] - 3545s 236ms/step - loss: 0.4615 - acc: 0.8137 - val_loss: 1.6390 - val_acc: 0.5981
finish model training!
Model saved!
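Notice that val_loss rises after the first epoch while the training accuracy keeps improving, which looks like overfitting. A hedged tweak, not in the original script, is to keep only the best validation epoch with standard Keras callbacks (the file name cls_mood_best.h5 is just an illustrative choice):

# Sketch: keep only the best validation epoch (restore_best_weights needs Keras >= 2.2.3)
from keras.callbacks import EarlyStopping, ModelCheckpoint

callbacks = [
    EarlyStopping(monitor='val_loss', patience=1, restore_best_weights=True),
    ModelCheckpoint('cls_mood_best.h5', monitor='val_loss', save_best_only=True),
]
model.fit_generator(train_D.__iter__(),
                    steps_per_epoch=len(train_D),
                    epochs=10,
                    validation_data=test_D.__iter__(),
                    validation_steps=len(test_D),
                    callbacks=callbacks)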

As shown in the figure below (in the original post), the trained model is saved as an h5 file of about 2 GB.

The final output is as follows:

模型评估结果: [1.6390499637700617, 0.5981]

Problem: a single step is quite slow, and the whole training run took about 3 hours. The following out-of-memory warning also appeared while tuning:

If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

The cause is that GPU usage is too high and video memory runs short; the fix is to reduce batch_size (the memory fraction is already capped at 90%). I have not yet found a better solution.
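A hedged alternative to reserving a fixed 90% memory fraction is to let TensorFlow 1.x grow GPU memory on demand; whether this actually avoids the OOM depends on your card and batch size:

# Sketch: allocate GPU memory on demand instead of a fixed 90% fraction (TF 1.x style)
import tensorflow as tf
import keras.backend as K

config = tf.ConfigProto()
config.gpu_options.allow_growth = True      # grab memory only as it is needed
K.set_session(tf.Session(config=config))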

train_D = DataGenerator(train_data)   # len(train_D) = 26131
test_D = DataGenerator(test_data)     # len(test_D) = 12171

Note that BATCH_SIZE controls the number of batches; for example, after changing it to 32 the two generators report:

6533 3043
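The step counts printed above are simply ceil(number of samples / batch_size), which is exactly what DataGenerator.__len__ returns; a quick sanity check:

import math
print(math.ceil(209043 / 8),  math.ceil(97366 / 8))    # 26131 12171  (BATCH_SIZE = 8)
print(math.ceil(209043 / 32), math.ceil(97366 / 32))   # 6533 3043    (BATCH_SIZE = 32)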


2. Model Evaluation

blog34_kerasbert_evaluate.py

# -*- coding: utf-8 -*-
"""
Created on Thu Nov 25 00:09:02
@author: xiuzhang
Reference: /percent4/keras_bert_text_classification
"""
import json
import numpy as np
import pandas as pd
from keras.models import load_model
from keras_bert import get_custom_objects
from sklearn.metrics import classification_report
from blog34_kerasbert_train import token_dict, OurTokenizer

maxlen = 300

# Load the trained model
model = load_model("cls_mood.h5", custom_objects=get_custom_objects())
tokenizer = OurTokenizer(token_dict)
with open("label.json", "r", encoding="utf-8") as f:
    label_dict = json.loads(f.read())

# Predict a single sentence
def predict_single_text(text):
    text = text[:maxlen]
    x1, x2 = tokenizer.encode(first=text)  # BERT tokenize
    X1 = x1 + [0] * (maxlen - len(x1)) if len(x1) < maxlen else x1
    X2 = x2 + [0] * (maxlen - len(x2)) if len(x2) < maxlen else x2
    #print(X1, X2)

    # Model prediction
    predicted = model.predict([[X1], [X2]])
    y = np.argmax(predicted[0])
    return label_dict[str(y)]

# Evaluate the model on the test set
def evaluate():
    test_df = pd.read_csv("data/weibo_3_moods_test.csv").fillna(value="")
    true_y_list, pred_y_list = [], []
    for i in range(test_df.shape[0]):
        true_y, content = test_df.iloc[i, :]
        pred_y = predict_single_text(content)
        print("predict %d samples" % (i + 1))
        print(true_y, pred_y)
        true_y_list.append(true_y)
        pred_y_list.append(pred_y)
    return classification_report(true_y_list, pred_y_list, digits=4)

#------------------------------------Model evaluation---------------------------------
output_data = evaluate()
print("model evaluate result:\n")
print(output_data)

The output is as follows:

These prediction scores are frighteningly low, haha! It may have to do with how my dataset is labelled; the upside is that the predictions are spread across the classes rather than piling up on a single one. How well does the model transfer? Readers are encouraged to try it, especially on higher-quality datasets.

The evaluation report is as follows:

model evaluate result:

             precision    recall  f1-score   support

         哀伤     0.4162    0.4301    0.4230     18452
         喜悦     0.7957    0.4244    0.5535     61453
         愤怒     0.2629    0.6854    0.3800     17461

avg / total     0.6282    0.4723    0.4977     97366
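The evaluate() function above predicts one sentence at a time, which is slow over 97,366 test samples. A hedged variant, assuming the max_len argument of Tokenizer.encode is available in your keras-bert version, pads whole batches and calls model.predict once per batch; it reuses model, tokenizer, label_dict, and maxlen from the script above.

# Sketch: batched prediction to speed up evaluation
import numpy as np

def predict_batch(texts, batch_size=64):
    labels = []
    for start in range(0, len(texts), batch_size):
        batch = texts[start:start + batch_size]
        # encode with a fixed max_len so every row in the batch has the same length
        pairs = [tokenizer.encode(first=t, max_len=maxlen) for t in batch]
        X1 = np.array([p[0] for p in pairs])
        X2 = np.array([p[1] for p in pairs])
        pred = model.predict([X1, X2])
        labels += [label_dict[str(i)] for i in np.argmax(pred, axis=1)]
    return labels

# e.g. pred_y_list = predict_batch(test_df["content"].tolist())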

3. Model Prediction

blog34_kerasbert_predict.py

# -*- coding: utf-8 -*-
"""
Created on Thu Nov 25 00:10:06
@author: xiuzhang
Reference: /percent4/keras_bert_text_classification
"""
import time
import json
import numpy as np
from blog34_kerasbert_train import token_dict, OurTokenizer
from keras.models import load_model
from keras_bert import get_custom_objects

maxlen = 256
s_time = time.time()

# Load the trained model
model = load_model("cls_mood.h5", custom_objects=get_custom_objects())
tokenizer = OurTokenizer(token_dict)
with open("label.json", "r", encoding="utf-8") as f:
    label_dict = json.loads(f.read())

# Example sentences to predict
text = "《长津湖》这部电影真的非常好看,今天看完好开心,爱了爱了。强烈推荐大家,哈哈!!!"
#text = "听到这个消息真心难受,很伤心,怎么这么悲剧。保佑保佑,哭"
#text = "愤怒,我真的挺生气的,怒其不争,哀其不幸啊!"

# Tokenize
text = text[:maxlen]
x1, x2 = tokenizer.encode(first=text)
X1 = x1 + [0] * (maxlen - len(x1)) if len(x1) < maxlen else x1
X2 = x2 + [0] * (maxlen - len(x2)) if len(x2) < maxlen else x2

# Model prediction
predicted = model.predict([[X1], [X2]])
y = np.argmax(predicted[0])
e_time = time.time()

print("原文: %s" % text)
print("预测标签: %s" % label_dict[str(y)])
print("Cost time:", e_time - s_time)

The output is shown below; the model correctly predicts all three types of comments.

原文: 《长津湖》这部电影真的非常好看,今天看完好开心,爱了爱了。强烈推荐大家,哈哈!!!
预测标签: 喜悦
原文: 听到这个消息真心难受,很伤心,怎么这么悲剧。保佑保佑,哭
预测标签: 哀伤
原文: 愤怒,我真的挺生气的,怒其不争,哀其不幸啊!
预测标签: 愤怒

V. Summary

That brings this article to a close. More articles will follow, including BERT for named entity recognition and the underlying theory. I sincerely hope this article helps you; keep at it~


Download addresses:

/eastmountyxz/AI-for-Keras
/eastmountyxz/AI-for-TensorFlow

(By: Eastmount -12-06, written at night in Wuhan, /eastmount/ )

References:

[1] /google-research/bert
[2] /bert_models/_11_03/chinese_L-12_H-768_A-12.zip
[3] /percent4/keras_bert_text_classification
[4] /huanghao128/bert_example
[5] How do you evaluate the BERT model? - Zhihu
[6] [NLP] Google BERT model principles explained - 李rumor
[7] NLP (35) Multi-class text classification with keras-bert - 山阴少年
[8] A simple BERT implementation with tensorflow2 + keras - 小黄
[9] Must-read NLP | Understand Google's BERT model in ten minutes - 奇点机智
[10] A detailed introduction to the BERT model - IT小佬
[11] [Deep learning] Natural language processing with Keras BERT (part 1) - 天空是很蓝
[12] /CyberZHG/keras-bert
[13] /bojone/bert4keras
[14] /ymcui/Chinese-BERT-wwm
[15] Named entity recognition with deep learning (6): an introduction to BERT - 涤生
[16] /qq_36949278/article/details/117637187
