If you plan to develop this from scratch, how hard it is depends on your own team's technical level. If it looks difficult, consider working with a third-party vendor instead, such as the ZEGO (即構科技) team, which claims 20 years of experience and has built its own real-time voice SDK; with it, implementing voice chat is straightforward, you simply integrate the SDK.
On Android, "voice" can mean several things: speech-recognition software that lets you launch apps by talking, or voice messaging in apps such as WeChat and QQ.
Although visual feedback is usually the fastest way to give users information, it requires the user to focus their attention on the device. When the user cannot look at the device, some other means of communication is needed. Android provides a powerful Text-to-Speech (TTS) API that lets developers add spoken notifications and other voice feedback to an app without requiring the user to look at the screen.
The following code shows how to use the TTS API:
import java.util.HashMap;
import java.util.Locale;
import java.util.concurrent.ConcurrentLinkedQueue;

import android.content.Context;
import android.media.AudioManager;
import android.speech.tts.TextToSpeech;
import android.util.Log;

public class TextToSpeechDemo implements TextToSpeech.OnInitListener {
    private final TextToSpeech mTextToSpeech;                          // TTS engine
    private final ConcurrentLinkedQueue<String> mBufferedMessages;     // message queue
    private Context mContext;
    private boolean mIsReady;                                          // "engine ready" flag

    public TextToSpeechDemo(Context context) {
        this.mContext = context;                                       // keep the context
        this.mBufferedMessages = new ConcurrentLinkedQueue<String>();  // create the queue
        this.mTextToSpeech = new TextToSpeech(this.mContext, this);    // create the TTS engine
    }

    // callback invoked when the TTS engine has finished initializing
    @Override
    public void onInit(int status) {
        Log.i("TextToSpeechDemo", String.valueOf(status));
        if (status == TextToSpeech.SUCCESS) {
            int result = this.mTextToSpeech.setLanguage(Locale.CHINA); // set the speech language to Chinese
            synchronized (this) {
                this.mIsReady = true;                                  // mark the engine as ready
                for (String bufferedMessage : this.mBufferedMessages) {
                    speakText(bufferedMessage);                        // speak any queued messages
                }
                this.mBufferedMessages.clear();                        // clear the queue afterwards
            }
        }
    }

    // release resources
    public void release() {
        synchronized (this) {
            this.mTextToSpeech.shutdown();
            this.mIsReady = false;
        }
    }

    // queue the message, or speak it right away if the engine is ready
    public void notifyNewMessage(String message) {
        synchronized (this) {
            if (this.mIsReady) {
                speakText(message);
            } else {
                this.mBufferedMessages.add(message);
            }
        }
    }

    // actually speak one message
    private void speakText(String message) {
        Log.i("liyuanjinglyj", message);
        HashMap<String, String> params = new HashMap<String, String>();
        // play on the notification audio stream
        params.put(TextToSpeech.Engine.KEY_PARAM_STREAM,
                String.valueOf(AudioManager.STREAM_NOTIFICATION));
        this.mTextToSpeech.speak(message, TextToSpeech.QUEUE_ADD, params);   // append to the speech queue
        this.mTextToSpeech.playSilence(100, TextToSpeech.QUEUE_ADD, params); // short pause between messages
    }
}
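A minimal usage sketch of the class above, for example from an Activity (the message text is just an illustration):

TextToSpeechDemo ttsDemo = new TextToSpeechDemo(this); // e.g. in onCreate()
ttsDemo.notifyNewMessage("您有一條新消息");             // queued until the engine reports ready, then spoken
// ...
ttsDemo.release();                                      // e.g. in onDestroy(), shuts down the TTS engine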
Note that the default TTS engine on many phones does not support Chinese; you can install a third-party engine such as iFlytek (訊飛) TTS and then test again.
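As a hedged sketch, onInit() could also check the result of setLanguage() and, instead of failing silently, ask the system to install missing voice data (these constants are part of the standard TTS API; mTextToSpeech and mContext are the fields from the class above):

int result = mTextToSpeech.setLanguage(Locale.CHINA);
if (result == TextToSpeech.LANG_MISSING_DATA || result == TextToSpeech.LANG_NOT_SUPPORTED) {
    // Chinese is not available: either install a third-party engine (e.g. iFlytek)
    // or ask the system to download voice data for the current engine.
    Intent installIntent = new Intent(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
    installIntent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
    mContext.startActivity(installIntent);
}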
Because the TTS engine is initialized asynchronously, messages have to be put into a queue before the actual text-to-speech can run.

Several parameters can be passed to the TTS engine. The code above shows how to choose the audio stream used for spoken messages; here the same stream that notification sounds use is chosen.

Finally, when speaking several messages in a row, it is best to pause briefly after each one before playing the next. This makes it clear to the user where one message ends and the next begins.
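Note that speak(String, int, HashMap) and playSilence() are deprecated on API 21+; a minimal sketch of the same speakText() logic using the newer Bundle-based overloads (the utterance IDs are arbitrary strings of my choosing):

private void speakText(String message) {
    Bundle params = new Bundle();
    params.putInt(TextToSpeech.Engine.KEY_PARAM_STREAM, AudioManager.STREAM_NOTIFICATION);
    String utteranceId = "msg-" + message.hashCode();
    mTextToSpeech.speak(message, TextToSpeech.QUEUE_ADD, params, utteranceId);          // speak the message
    mTextToSpeech.playSilentUtterance(100, TextToSpeech.QUEUE_ADD, utteranceId + "-p"); // short pause afterwards
}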
For Android, the fastest way to implement voice calling is to use an off-the-shelf SDK; ZEGO's real-time voice SDK is one option. Taking a real-time voice call between two people as an example, the integration flow is short: according to ZEGO, a basic voice-chat scene can be put together with a few lines of code in about thirty minutes.
Earlier I looked at UDP-based text transmission (link) and audio recording on Android (link). This post records voice transmission over a LAN between Android devices, a simple voice intercom. There are still plenty of problems, including unclear audio and a lot of noise, but for now hearing sound at all is good enough. Testing used two phones (screenshots omitted).
The program uses two threads: AudioRecordThread for recording and AudioTrackThread for playback.
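The run() methods below reference fields such as audioRec, audioTrk, mSocket, buffer and minBufferSize that are created elsewhere in the two thread classes; a minimal sketch of that setup is shown here (SAMPLE_RATE and PORT are assumed values, not taken from the original post):

// In AudioRecordThread (the sender); SAMPLE_RATE and PORT are assumptions, e.g. 8000 Hz and 9001:
minBufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
buffer = new byte[minBufferSize];
audioRec = new AudioRecord(MediaRecorder.AudioSource.MIC, SAMPLE_RATE,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBufferSize);
mSocket = new DatagramSocket();                  // sender: any free local port

// In AudioTrackThread (the receiver):
buffer = new byte[AudioTrack.getMinBufferSize(SAMPLE_RATE,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT)];
audioTrk = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLE_RATE,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        buffer.length, AudioTrack.MODE_STREAM);
mSocket = new DatagramSocket(PORT);              // receiver: bound to the agreed UDP port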
(1) Recording and sending
@Override
public void run() {
    if (mSocket == null)
        return;
    try {
        mStartTime = System.currentTimeMillis();
        audioRec.startRecording();
        while (flag) {
            try {
                byte[] bytes_pkg = buffer.clone();
                // skip the first couple of buffers before sending (they are usually noise)
                if (mRecordQueue.size() >= 2) {
                    int length = audioRec.read(buffer, 0, minBufferSize);
                    // compute the current volume so the UI can display it
                    mVolume = getAudioColum(buffer);
                    System.out.println(TAG + "= " + mVolume);
                    Message message = mHandler.obtainMessage();
                    message.arg1 = (int) mVolume;
                    mHandler.sendMessage(message);
                    // send the recorded PCM data as a UDP packet to the peer
                    DatagramPacket writePacket;
                    InetAddress inet = InetAddress.getByName(inetAddressName);
                    writePacket = new DatagramPacket(buffer, length, inet, PORT);
                    writePacket.setLength(length);
                    System.out.println("AudioRTwritePacket = " + writePacket.getData().toString());
                    mSocket.send(writePacket);
                }
                mRecordQueue.add(bytes_pkg);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        audioRec.stop();
        mStopTime = System.currentTimeMillis(); // used later to compute the talk duration
    } catch (Exception e) {
        e.printStackTrace();
    }
}

The code also computes the volume level so it can be shown in the UI; the method is adapted from this reference (link):
private double getAudioColum(byte[] buffer) {
    double sumVolume = 0.0;
    double avgVolume = 0.0;
    double volume = 0.0;
    // buffer holds 16-bit PCM samples, little-endian, two bytes per sample
    for (int i = 0; i < buffer.length; i += 2) {
        int v1 = buffer[i] & 0xFF;
        int v2 = buffer[i + 1] & 0xFF;
        int temp = v1 + (v2 << 8); // little-endian
        if (temp >= 0x8000) {
            temp = 0xffff - temp;
        }
        sumVolume += Math.abs(temp);
    }
    // average amplitude per sample (two bytes per sample)
    avgVolume = sumVolume / (buffer.length / 2.0);
    volume = Math.log10(1 + avgVolume) * 10; // rough dB-style value
    return volume;
}
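The recording thread posts the computed volume to the UI through message.arg1; a minimal sketch of the corresponding handler on the main page (volumeBar is an assumed ProgressBar, not from the original post):

private final Handler handler = new Handler(Looper.getMainLooper()) {
    @Override
    public void handleMessage(Message msg) {
        // msg.arg1 carries the volume value computed in AudioRecordThread
        volumeBar.setProgress(msg.arg1);
    }
};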
(2) Receiving and playback
@Override
public void run() {
    if (mSocket == null)
        return;
    // start playback, then keep reading incoming UDP packets and feeding them to the AudioTrack
    audioTrk.play();
    while (flag) {
        DatagramPacket recevPacket;
        try {
            recevPacket = new DatagramPacket(buffer, 0, buffer.length);
            mSocket.receive(recevPacket);
            audioTrk.write(recevPacket.getData(), 0, recevPacket.getLength());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    audioTrk.stop();
}
(3) Main page: receive-button click handler
@OnClick({R.id.btn_receive})
public void onViewClicked(View view) {
    switch (view.getId()) {
        case R.id.btn_receive:
            if (btnReceive.getText().toString().equals("開始接收")) {   // "start receiving"
                btnReceive.setText("停止接收");                          // switch label to "stop receiving"
                try {
                    if (audioTrackThread == null) {
                        audioTrackThread = new AudioTrackThread();
                    }
                    audioTrackThread.setFlag(true);                      // make sure the loop flag is set before (re)starting
                    new Thread(audioTrackThread).start();
                } catch (SocketException e) {
                    e.printStackTrace();
                }
            } else {
                btnReceive.setText("開始接收");                          // back to "start receiving"
                audioTrackThread.setFlag(false);                         // stop the playback loop
            }
            break;
    }
}
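The catch (SocketException) above only makes sense if the thread's constructor opens the UDP socket; a minimal sketch of the assumed constructor and setFlag() helper (PORT is the same assumed port as before):

public class AudioTrackThread implements Runnable {
    private DatagramSocket mSocket;
    private volatile boolean flag = true;
    // plus the AudioTrack and buffer fields sketched earlier

    public AudioTrackThread() throws SocketException {
        mSocket = new DatagramSocket(PORT);   // binding the port can throw SocketException
    }

    public void setFlag(boolean flag) {
        this.flag = flag;
    }

    @Override
    public void run() {
        // the receive-and-play loop shown in section (2)
    }
}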
(4) Send (push-to-talk) button event
ivSpeak.setOnTouchListener(new View.OnTouchListener() {
    @RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN)
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:
                // finger down: start recording
                ivSpeak.setText("正在說話");            // "speaking..."
                // show the recording hint
                relativeLayout.setVisibility(View.VISIBLE);
                try {
                    if (audioRecordThread == null) {
                        audioRecordThread = new AudioRecordThread(handler);
                    }
                    audioRecordThread.setInetAddressName(tvReceiveIp.getText().toString());
                    audioRecordThread.setFlag(true);
                    new Thread(audioRecordThread).start();
                } catch (SocketException e) {
                    e.printStackTrace();
                }
                break;
            case MotionEvent.ACTION_UP:
            case MotionEvent.ACTION_CANCEL:
                // finger up: stop recording
                ivSpeak.setText("按住說話");            // "hold to talk"
                relativeLayout.setVisibility(View.GONE);
                audioRecordThread.setFlag(false);
                mStopTime = audioRecordThread.getmStopTime();
                mStartTime = audioRecordThread.getmStartTime();
                creatMessageBean((mStopTime - mStartTime) / 1000, true); // add a chat item with the talk duration in seconds
                break;
        }
        return true;
    }
});
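Similarly, the push-to-talk handler relies on a few setters and getters of AudioRecordThread that are not shown in the snippets above; a minimal sketch of those assumed members (names derived from the calls in the handler):

public class AudioRecordThread implements Runnable {
    private final Handler mHandler;        // posts volume updates to the UI
    private DatagramSocket mSocket;
    private String inetAddressName;        // target IP, taken from tvReceiveIp
    private volatile boolean flag = true;
    private long mStartTime;
    private long mStopTime;
    // plus the AudioRecord, buffer and minBufferSize fields sketched earlier

    public AudioRecordThread(Handler handler) throws SocketException {
        this.mHandler = handler;
        this.mSocket = new DatagramSocket();   // sender socket, any free local port
    }

    public void setInetAddressName(String name) { this.inetAddressName = name; }
    public void setFlag(boolean flag) { this.flag = flag; }
    public long getmStartTime() { return mStartTime; }
    public long getmStopTime() { return mStopTime; }

    @Override
    public void run() {
        // the record-and-send loop shown in section (1)
    }
}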