
.. .-.. --- ...- . -.-- --- ..- ("I love you" in Morse code). I saw this while browsing programmers' confessions on Zhihu, and I thought: we coders have such high EQ! Ha. Okay, back to work.

https://rustfisher.github.io/2018/02/24/Android_note/Android-audio_AudioRecord_AudioTrack_pcm_wav/

The article linked above introduces the audio capture process. Afterwards, the product manager came to talk to me, saying the features and the UI were all great, but ("but" always makes my scalp tingle) one more feature was needed: while recording, the user should be able to pause, and to delete back to the last pause point (at that moment ten thousand alpacas galloped through my heart; when rank outranks you, what can you do!). Below, first a simple fix for a bug in the previous audio capture process.

  • win7
  • Android Studio 3.0.1

1. Marker operations on the audio
Previously the marker moved at a fixed speed: on every surfaceView draw pass (every 20 ms), the marker shifted 3 pixels to the left. So the marker's movement could not match the waveform's shift, and because the refresh rate is high, the marker visibly drifted away from the position it was supposed to mark.

Goal of this article: use AudioRecord and AudioTrack to capture and play back PCM audio data, and to read and write wav audio files.

That was quite embarrassing. The fix: when relating the number of bytes recorded to the total canvas length, compute how many bytes are removed from the list on each pass, and then shift the marker by the corresponding amount.
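A minimal sketch of that computation (the class, method, and parameter names here are my own, not from the project): derive the marker's offset from the bytes actually consumed rather than from a fixed per-frame shift.

```java
// Hypothetical helper illustrating the fix: the marker position is computed
// from the fraction of audio bytes consumed, so it always stays in sync with
// the waveform no matter how often the view redraws.
public class MarkerShift {
    /**
     * @param bytesConsumed bytes removed from the waveform list so far
     * @param totalBytes    bytes corresponding to the full canvas width
     * @param canvasWidthPx canvas width in pixels
     * @return how many pixels the marker should have moved left
     */
    public static int markerOffsetPx(long bytesConsumed, long totalBytes, int canvasWidthPx) {
        if (totalBytes <= 0) {
            return 0;
        }
        return (int) (bytesConsumed * canvasWidthPx / totalBytes);
    }

    public static void main(String[] args) {
        // Halfway through the data puts the marker halfway across the canvas.
        System.out.println(markerOffsetPx(500, 1000, 600)); // 300
    }
}
```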

Preparation

Android provides AudioRecord and MediaRecorder. MediaRecorder lets you choose the recording format, while AudioRecord yields PCM-encoded data. AudioRecord can configure the parameters of the analog-to-digital conversion, including the sample rate and the quantization depth, as well as the number of channels.

Only one extra parameter needs to be added (the original post shows a source screenshot here):

PCM

PCM, pulse-code modulation, is a common encoding used when converting an analog signal into a digital one: the analog signal is sampled at fixed intervals, and the strength of each interval is quantized in binary.
A PCM stream thus represents the amplitude of a piece of audio as it evolves over time. Android supports PCM audio data inside WAV files.


WAV

WAV and MP3 are among the more common audio formats; different encoding formats correspond to different treatments of the raw audio. To ease transmission, raw audio is usually compressed.
To let players identify the audio format, each format has a specific header.
WAV follows the RIFF standard. RIFF is a resource interchange file format; it stores the file as a series of tagged chunks.
The basic unit is the chunk; each chunk consists of three parts: a tag, a data size, and the stored data.


Packing PCM into WAV

PCM is the raw audio data; WAV is a common audio format on Windows that simply adds a file header in front of the pcm data.

Offset  Size      Meaning of the value at this offset
00H     4 bytes   "RIFF", the resource interchange file tag.
04H     4 bytes   Total bytes from the next address to the end of the file, stored little-endian (high byte last). Here it is 001437ECH, which is 1,325,036 bytes; adding the 8 bytes before this point gives exactly 1,325,044 bytes.
08H     4 bytes   "WAVE", marking the wav file format.
0CH     4 bytes   "fmt ", the format chunk tag.
10H     4 bytes   00000010H, i.e. 16: the size of the fmt chunk (16 for plain PCM).
14H     2 bytes   1 means linear PCM encoding; values greater than 1 mean a compressed encoding. Here it is 0001H.
16H     2 bytes   1 for mono, 2 for stereo. Here it is 0001H.
18H     4 bytes   Sample rate; here 00002B11H, i.e. 11025 Hz.
1CH     4 bytes   Byte rate = sample rate * channels * bits per sample / 8; here 00005622H, i.e. 22050 bytes/s = 11025 * 1 * 16 / 8.
20H     2 bytes   Block align = channels * bits per sample / 8; here 0002H, i.e. 2 == 1 * 16 / 8.
22H     2 bytes   Bits per sample; 0010H, i.e. 16, so one quantized sample occupies 2 bytes.
24H     4 bytes   "data", just a tag.
28H     4 bytes   Size of the actual audio data in the wav file; here 001437C8H, i.e. 1,325,000. Adding the 2CH (44) header bytes gives exactly 1,325,044, the whole file size.
2CH     variable  The quantized sample data.
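The two size fields in this table can be cross-checked with a little arithmetic. This sketch (my own helper, using only the example numbers from the table) shows how the file length, the RIFF size at 04H, and the data size at 28H relate:

```java
// For a standard 44-byte PCM wav header: the RIFF chunk size excludes the
// first 8 bytes ("RIFF" + the size field itself), and the data chunk size
// excludes the whole header.
public class WavSizes {
    public static long riffChunkSize(long fileLength) {
        return fileLength - 8;   // value stored at offset 04H
    }

    public static long dataChunkSize(long fileLength) {
        return fileLength - 44;  // value stored at offset 28H
    }

    public static void main(String[] args) {
        // Values from the example header above: the file is 1,325,044 bytes.
        System.out.println(riffChunkSize(1325044)); // 1325036 (001437ECH)
        System.out.println(dataChunkSize(1325044)); // 1325000 (001437C8H)
    }
}
```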

2. The rewind-delete operation during audio capture (the finished UI is shown in the original post).

AudioRecord

AudioRecord implements recording sound from the audio input device, yielding PCM-format audio.
The methods for reading the audio data are read(byte[], int, int), read(short[], int, int), and
read(ByteBuffer, int).
Choose among these methods according to how you store the data and what you need.

Required permission: <uses-permission android:name="android.permission.RECORD_AUDIO" />

The main change is just that a moving line was added below the audio capture view: each time recording pauses, a divider is drawn on this line, and when you delete, the line moves back to the right.

AudioRecord constructor

public AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes)

  • audioSource: the audio source device, usually the microphone, MediaRecorder.AudioSource.MIC
  • sampleRateInHz: the sample rate; 44100 Hz is the rate that all current devices support
  • channelConfig: the audio channel configuration, mono or stereo
  • audioFormat: the quantization depth, i.e. the number of bits per sample
  • bufferSizeInBytes:
    the size of the buffer used for each read of data from the hardware; it can be determined with getMinBufferSize()
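As a rough illustration of how these parameters interact (my own helper methods, not Android API): the byte rate determines how much audio a buffer of a given size actually holds.

```java
// Sketch: relating sample rate, channel count, and bit depth to buffer
// capacity. 17,640 bytes of 44100 Hz stereo 16-bit PCM is 100 ms of audio.
public class BufferMath {
    /** bytes per second = sampleRate * channels * bitsPerSample / 8 */
    public static int byteRate(int sampleRate, int channels, int bitsPerSample) {
        return sampleRate * channels * bitsPerSample / 8;
    }

    /** how many milliseconds of audio fit into bufferSizeInBytes */
    public static long bufferMillis(int bufferSizeInBytes, int sampleRate,
                                    int channels, int bitsPerSample) {
        return bufferSizeInBytes * 1000L / byteRate(sampleRate, channels, bitsPerSample);
    }

    public static void main(String[] args) {
        System.out.println(byteRate(44100, 2, 16));            // 176400
        System.out.println(bufferMillis(17640, 44100, 2, 16)); // 100
    }
}
```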

While recording, the line shifts to the left, and its movement speed matches the time scale above it.

Getting the wav file

To obtain a wav file, you need to add a header on top of the PCM data. You could convert a finished PCM file into wav; here is an approach where the PCM and the wav are written almost simultaneously.

Create the PCM and wav files at the same time, giving the wav file a default header. After the recording thread starts, write to both the PCM and the wav.
When recording completes, regenerate the header and use RandomAccessFile to patch the wav file's header.

The rewind-delete operation is actually done by clipping and merging after the recording is finished, so please read on.
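If you implement that clip yourself, the pause timestamp has to be mapped to a byte offset in the PCM stream. A minimal sketch, assuming 16-bit PCM; the class and method names are hypothetical, not from the project:

```java
// Hypothetical helper: convert a pause time in milliseconds to a PCM byte
// offset, aligned down to a whole sample frame so a cut never lands in the
// middle of a 16-bit sample.
public class PausePoint {
    public static long byteOffsetForMillis(long millis, int sampleRate, int channels) {
        int blockAlign = channels * 2;                 // 16-bit PCM: 2 bytes per sample
        long raw = millis * sampleRate / 1000 * blockAlign;
        return raw - (raw % blockAlign);               // align to a frame boundary
    }

    public static void main(String[] args) {
        // 500 ms of mono 44100 Hz 16-bit audio = 22050 samples = 44100 bytes.
        System.out.println(byteOffsetForMillis(500, 44100, 1)); // 44100
    }
}
```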

AudioTrack

Use AudioTrack to play the audio. When initializing the AudioTrack, configure it with the same parameters that were used for recording.


Code example

The utility class WindEar implements capturing and playing back PCM audio data, and reading and writing wav audio files.

  • AudioRecordThread
    records a PCM file with AudioRecord, and can optionally generate a wav file
  • AudioTrackPlayThread: a thread that plays a PCM or wav audio file with AudioTrack
  • WindState: represents the current state, e.g. whether it is playing, recording, etc.

PCM files are read and written with FileOutputStream and FileInputStream.

The generateWavFileHeader method generates the wav file header.

/**
 * Audio recorder.
 * Uses the AudioRecord and AudioTrack APIs to capture and play back PCM audio data,
 * and implements reading and writing wav audio files.
 * Permission checks and microphone checks are done in the Activity.
 * Created by Rust on 2018/2/24.
 */
public class WindEar {
    private static final String TAG = "rustApp";
    private static final String TMP_FOLDER_NAME = "AnWindEar";
    private static final int RECORD_AUDIO_BUFFER_TIMES = 1;
    private static final int PLAY_AUDIO_BUFFER_TIMES = 1;
    private static final int AUDIO_FREQUENCY = 44100;

    private static final int RECORD_CHANNEL_CONFIG = AudioFormat.CHANNEL_IN_STEREO;
    private static final int PLAY_CHANNEL_CONFIG = AudioFormat.CHANNEL_OUT_STEREO;
    private static final int AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;

    private AudioRecordThread aRecordThread;           // recording thread
    private volatile WindState state = WindState.IDLE; // current state
    private File tmpPCMFile = null;
    private File tmpWavFile = null;
    private OnState onStateListener;
    private Handler mainHandler = new Handler(Looper.getMainLooper());

    /**
     * PCM cache directory
     */
    private static String cachePCMFolder;

    /**
     * wav cache directory
     */
    private static String wavFolderPath;

    private static WindEar instance = new WindEar();

    private WindEar() {

    }

    public static WindEar getInstance() {
        if (null == instance) {
            instance = new WindEar();
        }
        return instance;
    }

    public void setOnStateListener(OnState onStateListener) {
        this.onStateListener = onStateListener;
    }

    /**
     * Initialize the directories
     */
    public static void init(Context context) {
        // store inside the app's dir or on the SD card
//        cachePCMFolder = context.getFilesDir().getAbsolutePath() + File.separator + TMP_FOLDER_NAME;
        cachePCMFolder = Environment.getExternalStorageDirectory().getAbsolutePath() + File.separator
                + TMP_FOLDER_NAME;

        File folder = new File(cachePCMFolder);
        if (!folder.exists()) {
            boolean f = folder.mkdirs();
            Log.d(TAG, String.format(Locale.CHINA, "PCM dir: %s -> %b", cachePCMFolder, f));
        } else {
            for (File f : folder.listFiles()) {
                boolean d = f.delete();
                Log.d(TAG, String.format(Locale.CHINA, "deleted PCM file: %s %b", f.getName(), d));
            }
            Log.d(TAG, String.format(Locale.CHINA, "PCM dir: %s", cachePCMFolder));
        }

        wavFolderPath = Environment.getExternalStorageDirectory().getAbsolutePath() + File.separator
                + TMP_FOLDER_NAME;
//        wavFolderPath = context.getFilesDir().getAbsolutePath() + File.separator + TMP_FOLDER_NAME;
        File wavDir = new File(wavFolderPath);
        if (!wavDir.exists()) {
            boolean w = wavDir.mkdirs();
            Log.d(TAG, String.format(Locale.CHINA, "wav dir: %s -> %b", wavFolderPath, w));
        } else {
            Log.d(TAG, String.format(Locale.CHINA, "wav dir: %s", wavFolderPath));
        }
    }

    /**
     * Start recording audio
     */
    public synchronized void startRecord(boolean createWav) {
        if (!state.equals(WindState.IDLE)) {
            Log.w(TAG, "cannot start recording, current state is " + state);
            return;
        }
        try {
            tmpPCMFile = File.createTempFile("recording", ".pcm", new File(cachePCMFolder));
            if (createWav) {
                SimpleDateFormat sdf = new SimpleDateFormat("yyMMdd_HHmmss", Locale.CHINA);
                tmpWavFile = new File(wavFolderPath + File.separator + "r" + sdf.format(new Date()) + ".wav");
            }
            Log.d(TAG, "tmp file " + tmpPCMFile.getName());
        } catch (IOException e) {
            e.printStackTrace();
        }
        if (null != aRecordThread) {
            aRecordThread.interrupt();
            aRecordThread = null;
        }
        aRecordThread = new AudioRecordThread(createWav);
        aRecordThread.start();
    }

    public synchronized void stopRecord() {
        if (!state.equals(WindState.RECORDING)) {
            return;
        }
        state = WindState.STOP_RECORD;
        notifyState(state);
    }

    /**
     * Play the recorded PCM file
     */
    public synchronized void startPlayPCM() {
        if (!isIdle()) {
            return;
        }
        new AudioTrackPlayThread(tmpPCMFile).start();
    }

    /**
     * Play the recorded wav file
     */
    public synchronized void startPlayWav() {
        if (!isIdle()) {
            return;
        }
        new AudioTrackPlayThread(tmpWavFile).start();
    }

    public synchronized void stopPlay() {
        if (!state.equals(WindState.PLAYING)) {
            return;
        }
        state = WindState.STOP_PLAY;
    }

    public synchronized boolean isIdle() {
        return WindState.IDLE.equals(state);
    }

    /**
     * Audio recording thread.
     * Uses FileOutputStream to write the files.
     */
    private class AudioRecordThread extends Thread {
        AudioRecord aRecord;
        int bufferSize = 10240;
        boolean createWav = false;

        AudioRecordThread(boolean createWav) {
            this.createWav = createWav;
            bufferSize = AudioRecord.getMinBufferSize(AUDIO_FREQUENCY,
                    RECORD_CHANNEL_CONFIG, AUDIO_ENCODING) * RECORD_AUDIO_BUFFER_TIMES;
            Log.d(TAG, "record buffer size = " + bufferSize);
            aRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, AUDIO_FREQUENCY,
                    RECORD_CHANNEL_CONFIG, AUDIO_ENCODING, bufferSize);
        }

        @Override
        public void run() {
            state = WindState.RECORDING;
            notifyState(state);
            Log.d(TAG, "recording started");
            try {
                // FileOutputStream is chosen here instead of DataOutputStream
                FileOutputStream pcmFos = new FileOutputStream(tmpPCMFile);

                FileOutputStream wavFos = null;
                if (createWav) {
                    // write a placeholder header first; it is patched after recording
                    wavFos = new FileOutputStream(tmpWavFile);
                    writeWavFileHeader(wavFos, bufferSize, AUDIO_FREQUENCY, aRecord.getChannelCount());
                }
                aRecord.startRecording();
                byte[] byteBuffer = new byte[bufferSize];
                while (state.equals(WindState.RECORDING) && !isInterrupted()) {
                    int end = aRecord.read(byteBuffer, 0, byteBuffer.length);
                    if (end <= 0) { // skip this pass on a read error
                        continue;
                    }
                    pcmFos.write(byteBuffer, 0, end);
                    pcmFos.flush();
                    if (createWav) {
                        wavFos.write(byteBuffer, 0, end);
                        wavFos.flush();
                    }
                }
                aRecord.stop(); // recording finished
                pcmFos.close();
                if (wavFos != null) {
                    wavFos.close();
                }
                if (createWav) {
                    // patch the header
                    RandomAccessFile wavRaf = new RandomAccessFile(tmpWavFile, "rw");
                    byte[] header = generateWavFileHeader(tmpPCMFile.length(), AUDIO_FREQUENCY, aRecord.getChannelCount());
                    Log.d(TAG, "header: " + getHexString(header));
                    wavRaf.seek(0);
                    wavRaf.write(header);
                    wavRaf.close();
                    Log.d(TAG, "tmpWavFile.length: " + tmpWavFile.length());
                }
                Log.i(TAG, "audio tmp PCM file len: " + tmpPCMFile.length());
            } catch (Exception e) {
                Log.e(TAG, "AudioRecordThread:", e);
                notifyState(WindState.ERROR);
            }
            notifyState(state);
            state = WindState.IDLE;
            notifyState(state);
            Log.d(TAG, "recording finished");
        }

    }

    private static String getHexString(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) {
            sb.append(Integer.toHexString(b & 0xff)).append(","); // mask to avoid sign extension
        }
        return sb.toString();
    }

    /**
     * AudioTrack playback thread.
     * Uses FileInputStream to read the file.
     */
    private class AudioTrackPlayThread extends Thread {
        AudioTrack track;
        int bufferSize = 10240;
        File audioFile = null;

        AudioTrackPlayThread(File aFile) {
            setPriority(Thread.MAX_PRIORITY);
            audioFile = aFile;
            bufferSize = AudioTrack.getMinBufferSize(AUDIO_FREQUENCY, // assign the field, do not shadow it
                    PLAY_CHANNEL_CONFIG, AUDIO_ENCODING) * PLAY_AUDIO_BUFFER_TIMES;
            track = new AudioTrack(AudioManager.STREAM_MUSIC,
                    AUDIO_FREQUENCY,
                    PLAY_CHANNEL_CONFIG, AUDIO_ENCODING, bufferSize,
                    AudioTrack.MODE_STREAM);
        }

        @Override
        public void run() {
            super.run();
            state = WindState.PLAYING;
            notifyState(state);
            try {
                FileInputStream fis = new FileInputStream(audioFile);
                track.play();
                byte[] aByteBuffer = new byte[bufferSize];
                int len;
                while (state.equals(WindState.PLAYING) &&
                        (len = fis.read(aByteBuffer)) >= 0) {
                    track.write(aByteBuffer, 0, len); // write only the bytes actually read
                }
                fis.close();
                track.stop();
                track.release();
            } catch (Exception e) {
                Log.e(TAG, "AudioTrackPlayThread:", e);
                notifyState(WindState.ERROR);
            }
            state = WindState.STOP_PLAY;
            notifyState(state);
            state = WindState.IDLE;
            notifyState(state);
        }

    }

    private synchronized void notifyState(final WindState currentState) {
        if (null != onStateListener) {
            mainHandler.post(new Runnable() {
                @Override
                public void run() {
                    onStateListener.onStateChanged(currentState);
                }
            });
        }
    }

    public interface OnState {
        void onStateChanged(WindState currentState);
    }

    /**
     * Represents the current state
     */
    public enum WindState {
        ERROR,
        IDLE,
        RECORDING,
        STOP_RECORD,
        PLAYING,
        STOP_PLAY
    }

    /**
     * @param out            wav audio file stream
     * @param totalAudioLen  total length of the audio data, excluding the header
     * @param longSampleRate sample rate, i.e. the frequency used while recording
     * @param channels       the AudioRecord's channel count
     * @throws IOException   on write error
     */
    private void writeWavFileHeader(FileOutputStream out, long totalAudioLen, long longSampleRate,
                                    int channels) throws IOException {
        byte[] header = generateWavFileHeader(totalAudioLen, longSampleRate, channels);
        out.write(header, 0, header.length);
    }

    /**
     * Any file format needs the proper header prepended so the format can be
     * identified. A wave file uses the RIFF structure: each part is a chunk,
     * including the RIFF WAVE chunk, the fmt chunk, the optional fact chunk,
     * and the data chunk.
     *
     * @param pcmAudioByteCount total length of the audio data, excluding the header
     * @param longSampleRate    sample rate, i.e. the frequency used while recording
     * @param channels          the AudioRecord's channel count
     */
    private byte[] generateWavFileHeader(long pcmAudioByteCount, long longSampleRate, int channels) {
        long totalDataLen = pcmAudioByteCount + 36; // wav file length excluding the first 8 bytes
        long byteRate = longSampleRate * 2 * channels;
        byte[] header = new byte[44];
        header[0] = 'R'; // RIFF
        header[1] = 'I';
        header[2] = 'F';
        header[3] = 'F';

        header[4] = (byte) (totalDataLen & 0xff); // RIFF chunk size
        header[5] = (byte) ((totalDataLen >> 8) & 0xff);
        header[6] = (byte) ((totalDataLen >> 16) & 0xff);
        header[7] = (byte) ((totalDataLen >> 24) & 0xff);

        header[8] = 'W';//WAVE
        header[9] = 'A';
        header[10] = 'V';
        header[11] = 'E';
        //FMT Chunk
        header[12] = 'f'; // 'fmt '
        header[13] = 'm';
        header[14] = 't';
        header[15] = ' '; // padding byte
        // size of the 'fmt ' chunk
        header[16] = 16; // 4 bytes: size of 'fmt ' chunk
        header[17] = 0;
        header[18] = 0;
        header[19] = 0;
        // encoding: 0001H means linear PCM
        header[20] = 1; // format = 1
        header[21] = 0;
        // channel count
        header[22] = (byte) channels;
        header[23] = 0;
        // sample rate, the playback rate of each channel
        header[24] = (byte) (longSampleRate & 0xff);
        header[25] = (byte) ((longSampleRate >> 8) & 0xff);
        header[26] = (byte) ((longSampleRate >> 16) & 0xff);
        header[27] = (byte) ((longSampleRate >> 24) & 0xff);
        // byte rate = sample rate * channels * bits per sample / 8
        header[28] = (byte) (byteRate & 0xff);
        header[29] = (byte) ((byteRate >> 8) & 0xff);
        header[30] = (byte) ((byteRate >> 16) & 0xff);
        header[31] = (byte) ((byteRate >> 24) & 0xff);
        // block align: how many bytes the system handles at a time = channels * bits per sample / 8
        header[32] = (byte) (2 * channels);
        header[33] = 0;
        // bits per sample
        header[34] = 16;
        header[35] = 0;
        //Data chunk
        header[36] = 'd';//data
        header[37] = 'a';
        header[38] = 't';
        header[39] = 'a';
        header[40] = (byte) (pcmAudioByteCount & 0xff);
        header[41] = (byte) ((pcmAudioByteCount >> 8) & 0xff);
        header[42] = (byte) ((pcmAudioByteCount >> 16) & 0xff);
        header[43] = (byte) ((pcmAudioByteCount >> 24) & 0xff);
        return header;
    }
}
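The hand-rolled little-endian byte shifting in generateWavFileHeader can equivalently be written with java.nio.ByteBuffer. This sketch (my own, not part of the original class) produces the same 44-byte layout for 16-bit PCM:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Alternative wav header builder using ByteBuffer in little-endian mode.
public class WavHeader {
    public static byte[] build(long pcmBytes, int sampleRate, int channels) {
        ByteBuffer b = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
        b.put(new byte[]{'R', 'I', 'F', 'F'});
        b.putInt((int) (pcmBytes + 36));       // RIFF chunk size
        b.put(new byte[]{'W', 'A', 'V', 'E'});
        b.put(new byte[]{'f', 'm', 't', ' '});
        b.putInt(16);                          // fmt chunk size
        b.putShort((short) 1);                 // format = 1, linear PCM
        b.putShort((short) channels);
        b.putInt(sampleRate);
        b.putInt(sampleRate * channels * 2);   // byte rate, 16-bit samples
        b.putShort((short) (channels * 2));    // block align
        b.putShort((short) 16);                // bits per sample
        b.put(new byte[]{'d', 'a', 't', 'a'});
        b.putInt((int) pcmBytes);              // data chunk size
        return b.array();
    }

    public static void main(String[] args) {
        byte[] h = build(1000, 44100, 2);
        System.out.println(h.length);    // 44
        System.out.println((char) h[0]); // R
    }
}
```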

3. On to the main topic: audio editing.
First, a look at the interface (screenshot in the original post).

References

  • AudioRecord –
    developer.android.com
  • AudioTrack –
    developer.android.com


The user can swipe left and right at the bottom to control the position of the cut point. The way this time axis changes is different from the recording one: this one is dynamic, generated by dynamically adding child Views to a LinearLayout. Very simple; each tick on my side is 60 dp, which you can change as needed. ll_wave_content is the parent control wrapping timeLine. The code is as follows:

    /**
     * Build the audio time scale
     */
    private void timeSize() {
        timeLine = (LinearLayout) this.findViewById(R.id.ll_time_counter);
        tv_totalTime.setText(formatTime(totalTime) + "");
        timeLine.removeAllViews();
        totleLength = totalTime * DensityUtil.dip2px(60);
//      timeLine1.removeAllViews();
        ll_wave_content1.setLayoutParams(new FrameLayout.LayoutParams(
                totalTime * DensityUtil.dip2px(60), LayoutParams.MATCH_PARENT));
        ll_wave_content.setLayoutParams(new FrameLayout.LayoutParams(
                totalTime * DensityUtil.dip2px(60), LayoutParams.MATCH_PARENT));
        timeLine1.setLayoutParams(new RelativeLayout.LayoutParams(
                totalTime * DensityUtil.dip2px(60), LayoutParams.MATCH_PARENT));
        for (int i = 0; i < totalTime; i++) {
            LinearLayout line1 = new LinearLayout(this);
            line1.setOrientation(LinearLayout.HORIZONTAL);
            line1.setLayoutParams(new LayoutParams(
                    DensityUtil.dip2px(60), LinearLayout.LayoutParams.WRAP_CONTENT));
            line1.setGravity(Gravity.CENTER);
            TextView timeText = new TextView(this);
            timeText.setText(formatTime(i));
            timeText.setWidth(DensityUtil.dip2px(60) - 2);
            timeText.setGravity(Gravity.CENTER_HORIZONTAL);
            TextPaint paint = timeText.getPaint();
            paint.setFakeBoldText(true); // bold text
            timeText.setTextColor(Color.rgb(204, 204, 204));
            View line2 = new View(this);
            line2.setBackgroundColor(Color.rgb(204, 204, 204));
            line2.setPadding(0, 10, 0, 0);
            line1.addView(timeText);
            line1.addView(line2);
            timeLine.addView(line1);
        }
    }
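The axis above allocates one 60 dp cell per second of audio. The dp-to-px arithmetic that DensityUtil.dip2px presumably performs looks like this (the class name and the density value here are assumptions for illustration):

```java
// Sketch of the dp-to-px conversion behind the time axis layout. On-device,
// the density comes from getResources().getDisplayMetrics().density.
public class TimeAxis {
    public static int dipToPx(float dp, float density) {
        return (int) (dp * density + 0.5f); // round to the nearest pixel
    }

    /** total axis width in px for a clip of totalTime seconds, 60 dp per second */
    public static int axisWidthPx(int totalTimeSeconds, float density) {
        return totalTimeSeconds * dipToPx(60, density);
    }

    public static void main(String[] args) {
        System.out.println(axisWidthPx(10, 2.0f)); // 1200 px on an xhdpi (density 2.0) screen
    }
}
```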

Compared with other audio formats, wav is relatively simple: it is just pcm with a header added on top. The wav header format is as follows:


If this isn't clear, you can Baidu or Google it yourself; there are plenty of articles covering it.

Since pcm data plus a wav header is all it takes, clipping and merging become quite convenient. Here is the merge method:

        /**
         * merge *.wav files 
         * @param target  output file
         * @param paths the files that need to merge
         * @return whether merge files success
         */
        public static boolean mergeAudioFiles(String target,List<String> paths) {
            try {
                FileOutputStream fos = new FileOutputStream(target);            
                int size=0;
                byte[] buf = new byte[1024 * 1000];
                int PCMSize = 0;
                for(int i=0;i<paths.size();i++){
                    FileInputStream fis = new FileInputStream(paths.get(i));
                    size = fis.read(buf);
                     while (size != -1){
                        PCMSize += size;
                        size = fis.read(buf);
                    }
                    fis.close();
                }
                PCMSize=PCMSize-paths.size()*44;
                WaveHeader header = new WaveHeader();
                header.fileLength = PCMSize + (44 - 8);
                header.FmtHdrLeth = 16;
                header.BitsPerSample = 16;
                header.Channels = 1;
                header.FormatTag = 0x0001;
                header.SamplesPerSec = 16000;
                header.BlockAlign = (short) (header.Channels * header.BitsPerSample / 8);
                header.AvgBytesPerSec = header.BlockAlign * header.SamplesPerSec;
                header.DataHdrLeth = PCMSize;
                byte[] h = header.getHeader();
                assert h.length == 44;
                fos.write(h, 0, h.length);
                for(int j=0;j<paths.size();j++){
                    FileInputStream fis = new FileInputStream(paths.get(j));
                    size = fis.read(buf);
                    boolean isFirst=true;
                    while (size != -1){
                        if(isFirst){
                            fos.write(buf, 44, size-44);
                            size = fis.read(buf);
                            isFirst=false;
                        }else{
                            fos.write(buf, 0, size);
                            size = fis.read(buf);
                        }
                    }
                    fis.close();
                }
                fos.close();
            } catch (Exception e) {
                e.printStackTrace();
                return false;
            }
            return true;
        }

As for the clipping class, the thing to note is that you must first feed in the wav file you are operating on and parse its header; after that you can compute the frame ranges you want to remove, and the rest is the related logic, which I won't go through one by one:

public class CheapWAV extends CheapSoundFile {
    public static Factory getFactory() {
        return new Factory() {
            public CheapSoundFile create() {
                return new CheapWAV();
            }
            public String[] getSupportedExtensions() {
                return new String[] { "wav" };
            }
        };
    }

    // Member variables containing frame info
    private int mNumFrames;
    private int[] mFrameOffsets;
    private int[] mFrameLens;
    private int[] mFrameGains;
    private int mFrameBytes;
    private int mFileSize;
    private int mSampleRate;
    private int mChannels;
    // Member variables used during initialization
    private int mOffset;

    public CheapWAV() {
    }

    public int getNumFrames() {
        return mNumFrames;
    }

    public int getSamplesPerFrame() {
        return mSampleRate / 50;
    }

    public int[] getFrameOffsets() {
        return mFrameOffsets;
    }

    public int[] getFrameLens() {
        return mFrameLens;
    }

    public int[] getFrameGains() {
        return mFrameGains;
    }

    public int getFileSizeBytes() {
        return mFileSize;        
    }

    public int getAvgBitrateKbps() {
        return mSampleRate * mChannels * 2 / 1024;
    }

    public int getSampleRate() {
        return mSampleRate;
    }

    public int getChannels() {
        return mChannels;
    }

    public String getFiletype() {
        return "WAV";
    }

//    public int secondsToFrames(double seconds) {
//        return (int)(1.0 * seconds * mSampleRate / mSamplesPerFrame + 0.5);
//    }


    public void ReadFile(File inputFile)
            throws java.io.FileNotFoundException,
                   java.io.IOException {
        super.ReadFile(inputFile);
        mFileSize = (int)mInputFile.length();

        if (mFileSize < 128) {
            throw new java.io.IOException("File too small to parse");
        }

        FileInputStream stream = new FileInputStream(mInputFile);
        byte[] header = new byte[12];
        stream.read(header, 0, 12);
        mOffset += 12;
        if (header[0] != 'R' ||
            header[1] != 'I' ||
            header[2] != 'F' ||
            header[3] != 'F' ||
            header[8] != 'W' ||
            header[9] != 'A' ||
            header[10] != 'V' ||
            header[11] != 'E') {
            throw new java.io.IOException("Not a WAV file");
        }

        mChannels = 0;
        mSampleRate = 0;
        while (mOffset + 8 <= mFileSize) {
            byte[] chunkHeader = new byte[8];
            stream.read(chunkHeader, 0, 8);
            mOffset += 8;

            int chunkLen =
                ((0xff & chunkHeader[7]) << 24) |
                ((0xff & chunkHeader[6]) << 16) |
                ((0xff & chunkHeader[5]) << 8) |
                ((0xff & chunkHeader[4]));

            if (chunkHeader[0] == 'f' &&
                chunkHeader[1] == 'm' &&
                chunkHeader[2] == 't' &&
                chunkHeader[3] == ' ') {
                if (chunkLen < 16 || chunkLen > 1024) {
                    throw new java.io.IOException(
                        "WAV file has bad fmt chunk");
                }

                byte[] fmt = new byte[chunkLen];
                stream.read(fmt, 0, chunkLen);
                mOffset += chunkLen;

                int format =
                    ((0xff & fmt[1]) << 8) |
                    ((0xff & fmt[0]));
                mChannels =
                    ((0xff & fmt[3]) << 8) |
                    ((0xff & fmt[2]));
                mSampleRate =
                    ((0xff & fmt[7]) << 24) |
                    ((0xff & fmt[6]) << 16) |
                    ((0xff & fmt[5]) << 8) |
                    ((0xff & fmt[4]));

                if (format != 1) {
                    throw new java.io.IOException(
                        "Unsupported WAV file encoding");
                }

            } else if (chunkHeader[0] == 'd' &&
                       chunkHeader[1] == 'a' &&
                       chunkHeader[2] == 't' &&
                       chunkHeader[3] == 'a') {
                if (mChannels == 0 || mSampleRate == 0) {
                    throw new java.io.IOException(
                        "Bad WAV file: data chunk before fmt chunk");
                }

                int frameSamples = (mSampleRate * mChannels) / 50;
                mFrameBytes = frameSamples * 2;
                mNumFrames = (chunkLen + (mFrameBytes - 1)) / mFrameBytes;
                mFrameOffsets = new int[mNumFrames];
                mFrameLens = new int[mNumFrames];
                mFrameGains = new int[mNumFrames];

                byte[] oneFrame = new byte[mFrameBytes];

                int i = 0;
                int frameIndex = 0;
                while (i < chunkLen) {
                    int oneFrameBytes = mFrameBytes;
                    if (i + oneFrameBytes > chunkLen) {
                        i = chunkLen - oneFrameBytes;
                    }

                    stream.read(oneFrame, 0, oneFrameBytes);

                    int maxGain = 0;
                    for (int j = 1; j < oneFrameBytes; j += 4 * mChannels) {
                        int val = java.lang.Math.abs(oneFrame[j]);
                        if (val > maxGain) {
                            maxGain = val;
                        }
                    }

                    mFrameOffsets[frameIndex] = mOffset;
                    mFrameLens[frameIndex] = oneFrameBytes;
                    mFrameGains[frameIndex] = maxGain;

                    frameIndex++;
                    mOffset += oneFrameBytes;
                    i += oneFrameBytes;

                    if (mProgressListener != null) {
                        boolean keepGoing = mProgressListener.reportProgress(
                            i * 1.0 / chunkLen);
                        if (!keepGoing) {
                            break;
                        }
                    }
                }

            } else {
                stream.skip(chunkLen);
                mOffset += chunkLen;
            }
        }
    }

    public void WriteFile(File outputFile, int startFrame, int numFrames)
            throws java.io.IOException {
        outputFile.createNewFile();
        FileInputStream in = new FileInputStream(mInputFile);
        FileOutputStream out = new FileOutputStream(outputFile);

        long totalAudioLen = 0;
        for (int i = 0; i < numFrames; i++) {
            totalAudioLen += mFrameLens[startFrame + i];
        }

        long totalDataLen = totalAudioLen + 36;
        long longSampleRate = mSampleRate;
        long byteRate = mSampleRate * 2 * mChannels;

        byte[] header = new byte[44];
        header[0] = 'R';  // RIFF/WAVE header
        header[1] = 'I';
        header[2] = 'F';
        header[3] = 'F';
        header[4] = (byte) (totalDataLen & 0xff);
        header[5] = (byte) ((totalDataLen >> 8) & 0xff);
        header[6] = (byte) ((totalDataLen >> 16) & 0xff);
        header[7] = (byte) ((totalDataLen >> 24) & 0xff);
        header[8] = 'W';
        header[9] = 'A';
        header[10] = 'V';
        header[11] = 'E';
        header[12] = 'f';  // 'fmt ' chunk
        header[13] = 'm';
        header[14] = 't';
        header[15] = ' ';
        header[16] = 16;  // 4 bytes: size of 'fmt ' chunk
        header[17] = 0;
        header[18] = 0;
        header[19] = 0;
        header[20] = 1;  // format = 1
        header[21] = 0;
        header[22] = (byte) mChannels;
        header[23] = 0;
        header[24] = (byte) (longSampleRate & 0xff);
        header[25] = (byte) ((longSampleRate >> 8) & 0xff);
        header[26] = (byte) ((longSampleRate >> 16) & 0xff);
        header[27] = (byte) ((longSampleRate >> 24) & 0xff);
        header[28] = (byte) (byteRate & 0xff);
        header[29] = (byte) ((byteRate >> 8) & 0xff);
        header[30] = (byte) ((byteRate >> 16) & 0xff);
        header[31] = (byte) ((byteRate >> 24) & 0xff);
        header[32] = (byte) (2 * mChannels);  // block align
        header[33] = 0;
        header[34] = 16;  // bits per sample
        header[35] = 0;
        header[36] = 'd';
        header[37] = 'a';
        header[38] = 't';
        header[39] = 'a';
        header[40] = (byte) (totalAudioLen & 0xff);
        header[41] = (byte) ((totalAudioLen >> 8) & 0xff);
        header[42] = (byte) ((totalAudioLen >> 16) & 0xff);
        header[43] = (byte) ((totalAudioLen >> 24) & 0xff);
        out.write(header, 0, 44);

        byte[] buffer = new byte[mFrameBytes];
        int pos = 0;
        for (int i = 0; i < numFrames; i++) {
            int skip = mFrameOffsets[startFrame + i] - pos;
            int len = mFrameLens[startFrame + i];
            if (skip < 0) {
                continue;
            }
            if (skip > 0) {
                in.skip(skip);
                pos += skip;
            }
            in.read(buffer, 0, len);
            out.write(buffer, 0, len);
            pos += len;
        }

        in.close();
        out.close();
    }
}
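CheapWAV divides the audio into 20 ms frames (getSamplesPerFrame() returns mSampleRate / 50). Turning a time range into the (startFrame, numFrames) pair that WriteFile expects can be sketched like this (my own helper, not part of the class):

```java
// Sketch: map a time range in seconds to CheapWAV frame indices.
public class FrameRange {
    public static final int FRAMES_PER_SECOND = 50; // 20 ms per frame

    public static int secondsToFrames(double seconds) {
        return (int) (seconds * FRAMES_PER_SECOND + 0.5);
    }

    public static void main(String[] args) {
        // Keep the segment from 1.5 s to 3.0 s of the clip.
        int start = secondsToFrames(1.5);       // 75
        int num = secondsToFrames(3.0) - start; // 75
        System.out.println(start + " " + num);  // 75 75
    }
}
```

With these values you would call WriteFile(outputFile, start, num) on the parsed CheapWAV instance.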

Alright. It has genuinely been a hectic stretch lately; maintenance and upgrades on other projects have had my scalp tingling. If you have any questions, feel free to leave a comment.

Note: the audio clipping class comes from another author's open-source mini project, a quick music clipper; I only made modifications on top of it! Among other formats, MP3 is still manageable, but clipping AAC or M4A is more troublesome, since it requires re-encoding; I suggest using FFmpeg to re-encode and clip those. As for porting FFmpeg to Android, there are plenty of projects on GitHub and many blog posts introducing it. Personally I suggest not compiling it yourself (unless you have time to spare); many prebuilt versions already exist and can be used directly.

GitHub address (when you download it, please give it a star to acknowledge the author's work, thanks):
https://github.com/T-chuangxin/VideoMergeDemo
