Archive for the category “Allgemeines”
Producing subtitles (without expensive software) is not that difficult. One of the best choices is Aegisub, which works with the .ass file format. Aegisub lets you use self-defined styles for different subtitles. The .ass file can later be used for Blu-ray disc subtitle rendering with avs2bdnxml or for dvd authoring. The following is the call for PAL (720 x 576 resolution); you have to use Avisynth as a frameserver:
avs2bdnxml -t Undefined -l und -v 576i -f 25 -a1 -p0 -b0 -u0 -m3 -o target-xml-file.xml source-avi-via-avisynth.avs
Using Avisynth as a frameserver, you can use almost any file format as a base to render subtitles. The .avs file for your language looks like the following:
LoadPlugin("VSFilterMod.dll")
LoadPlugin("PATH-TO-dvd2avi-OR-dgmpgdec\dvd2avi_dgmpgdec158\DGDecode.dll")
MPEG2Source("d2v-fileproduced-by-dvd2avi-or-dgmpgdec-if-you-use-a-dvd.d2v")
MaskSubMod("subtitle-file-in-ass-file-format.ass",720,576,25,85246)
You see that you need the VSFilterMod.dll and DGDecode.dll plugins. Then you have to prepare a .d2v file in case you work with a dvd. The MaskSubMod() call uses the PAL resolution of 720 x 576, 25 fps, and the number of frames (up to the end of the last subtitle). For dvd authoring, one has to reduce the color depth, because the dvd specification allows only 3+1 colors (one color being transparency). BDSup2Sub.jar is a good choice for doing that.
Replace subtitles after authoring
However, not all (even professional) dvd authoring software suites allow importing all 3+1 colors. For example, Adobe Encore works internally with 3+1 but allows importing only 2+1. A clear bug! But it seems nobody at Adobe wants to fix it. However, you can replace subtitles:
- Demux the authored dvd with PgcDemux
- produce good looking subtitles with avs2bdnxml and BDSup2Sub
- check the .sup files with SubtitleCreator
- remux with Muxman or IfoEdit (preferably Muxman, as it allows saving project files)
- and merge the newly authored dvd with the previously created menus using VobBlanker.
Convert BDN .xml+.png to .sup
The following are the calls for BDSup2Sub to produce dvd subtitles: first the call for 2 colors, then the call for 3 colors. Please read the manual and play with the options directly in BDSup2Sub to get a feeling for what it does. The colors you choose have a direct effect on the way BDSup2Sub reduces them. In case of any problems you do not need to re-render the subtitles: you can easily replace colors with a batch script using XnView and then proceed with the command line.
java -jar PATH-TO-BDSup2Sub\BDSup2Sub.jar target-xml-file.xml output-idx-2-colors.idx /lang:en /atr:137 /ltr1:41 /ltr2:42 /acrop:0
java -jar PATH-TO-BDSup2Sub\BDSup2Sub.jar target-xml-file.xml output-idx-3-colors.idx /lang:en /atr:137 /ltr1:41 /ltr2:180 /acrop:0
BDSup2Sub.jar produces .sup files. With SubtitleCreator you can open them, apply a new color code (if you like), and export them as single images (the actual subtitles) into a new folder. Be aware that colors are stored centrally in the CLUT (color lookup table) and not directly in the video stream. It is easier to replace colors later, after authoring, with DVDSubEdit. You can use the .xml file produced by avs2bdnxml to extract the relevant information (e.g. start/end time codes, size, position on the screen, etc.) via export to a spreadsheet (open it with IE, right mouse-click, and export to Excel). There are also small command-line tools available to convert from .xml to .csv or any tab-based format. Please also be aware that Aegisub uses milliseconds, whereas avs2bdnxml outputs frames (25 fps in the case of PAL, 29.97 for NTSC; see the last part of the time code). From this information and a short
dir /B folder-with-images > file-with-image-names.txt
you can write the subtitle image names into a file, import this file into the spreadsheet as well, and export everything into a file suitable for dvd authoring software suites. Then you can use a scripting language to create a subtitle file format of your choice and author your dvd.
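The extraction step described above can also be sketched in a few lines of Python, including the frames-to-milliseconds conversion. This is only a minimal sketch: the tag and attribute names (Event, InTC, OutTC, Graphic, X, Y) are assumptions about the BDN format, so verify them against your own .xml before relying on it.

```python
# Minimal sketch: pull start/end time codes (converted from frames to
# milliseconds) and image name/position out of a BDN .xml produced by
# avs2bdnxml. The tag and attribute names (Event, InTC, OutTC, Graphic,
# X, Y) are assumptions; check them against your own file.
import xml.etree.ElementTree as ET

def tc_to_ms(tc, fps=25.0):
    """Convert 'hh:mm:ss:ff' (the last field counts frames) to milliseconds."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return int(round((h * 3600 + m * 60 + s + f / fps) * 1000))

def bdn_events(xml_text, fps=25.0):
    """Yield (image, start_ms, end_ms, x, y) for every Event element."""
    root = ET.fromstring(xml_text)
    for ev in root.iter("Event"):
        g = ev.find("Graphic")
        yield (g.text,
               tc_to_ms(ev.get("InTC"), fps),
               tc_to_ms(ev.get("OutTC"), fps),
               int(g.get("X")), int(g.get("Y")))
```

Write the resulting tuples out with csv.writer(..., delimiter="\t") and you have the tab-based file for the spreadsheet.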
Internationalization – how to handle right to left (RTL) languages
So far, so good. That works easily with left-to-right languages. With Aegisub you can choose another font, so not only Latin-based languages are displayed properly, but also languages like Gujarati, Hindi, Khmer, Japanese, Traditional Chinese, etc.
However, if you try to import Arabic, Farsi, Hebrew, or any other right-to-left (RTL) language, you will encounter many problems. In short: everything is messed up! The punctuation marks (commas, quotes, etc.) are (almost all) in the wrong places. Not to mention that other software (e.g. Microsoft Excel/Office) handles RTL properly. As for external software made solely to render subtitles, I have not found any that was capable of RTL import. Maybe the really expensive software can do it, but who wants to spend >2000,- € just to render subtitles (and they won’t look better than with the method described above)? The reason for the poor capability of these software apps is that they do not work with the Unicode control characters. Punctuation marks seem to be handled LTR (left-to-right) instead of RTL. Although several authors claim they support RTL and even advertise RTL capabilities, they don’t (I tested their software!).
So, how to proceed? Let’s take the case that you already have time codes from another language. Bring the subtitle file into a format so that:
one line = one subtitle (with time codes)
The .ass format does this (Aegisub). Then open it with Windows Notepad (or a Unicode text editor, but NO word processor -> MS Word is NOT a text editor). Insert a “\t” (TAB escape sequence) between the time codes and the text, e.g. using the .ass file format:
Dialogue: 0,0:00:12.94,0:00:17.82,person 1,person 1,0000,0000,0000,,<-INSERT HERE TAB SPACE ->TEXT
You can do that with search & replace. Save the file as Unicode (for Excel or another spreadsheet editor) and open it with a spreadsheet calculator. Then you have two columns: one with style and time-code info, one with pure text. Format the text column as you like for RTL or insert (in case of a new translation) RTL text. Then copy both columns into a new text file (again: use Windows Notepad, a small tool which is highly underestimated). Remove the “\t” (TAB space) with search & replace and save the file as “utf-8” for Aegisub. Open Aegisub and look at the preview to see how it looks.
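The TAB insertion itself can be automated. The following is a small sketch, assuming the standard .ass Dialogue field order (nine comma-separated fields before the text field):

```python
# Minimal sketch: insert a TAB between the style/time-code fields and the
# text of each .ass Dialogue line. The .ass Dialogue format has nine
# comma-separated fields before the text, so we split at the 9th comma.
def split_dialogue(line):
    if not line.startswith("Dialogue:"):
        return line              # headers, styles, comments: leave untouched
    parts = line.split(",", 9)   # commas inside the text itself stay intact
    return ",".join(parts[:9]) + ",\t" + parts[9]
```

Apply it line by line to the .ass file and save the result as Unicode for the spreadsheet; removing the TAB afterwards is a plain search & replace again.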
Maybe everything is ok – but I doubt it. Many problems occur in the following situations:
- a punctuation mark is placed directly before an automatic or manual line break. It appears at the beginning of the sub and not where it should be (at the end of the line)
- the same occurs at the end of sentences or lines (commas, dots, etc.): they do not appear at the end of the line
This makes sense, because from an LTR perspective they are placed at the end of the line (sentence, etc.). But this is not true from an RTL perspective. Mixing both writing directions is difficult. Now comes the hard manual work (if you know an automatic workaround, please inform me!). Open the same subtitle file with Windows Notepad and read the following website from Microsoft about Unicode control characters. Proceed as follows:
- enable unicode chars to be displayed (“show unicode control chars”)
- go to the line/ place the cursor directly before e.g. the comma
- right mouseclick: insert unicode control char -> choose “LRE” (“Start of left-to-right embedding (LRE)”)
- either insert the comma (or any other char), OR go to the place directly after the comma (if it is already there) and insert a unicode control char -> choose “PDF” (“Pop directional formatting (PDF)”)
- Reopen Aegisub and look what has changed
- Repeat steps 1-5 until everything looks ok. The rendering with avs2bdnxml won’t fail if everything is ok in Aegisub.
You can prevent all this if you are disciplined and insert the LRE/PDF control chars while creating your RTL text in the first place.
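A rough automatic approximation of the manual procedure can be sketched in Python. This is only a naive heuristic (it mechanically wraps every run of punctuation in LRE ... PDF) and the punctuation set is an assumption, so inspect the result in Aegisub just as with the manual method:

```python
# Naive sketch: wrap runs of punctuation in LRE (U+202A) ... PDF (U+202C)
# so they keep their intended position inside RTL subtitle text.
# Which characters to wrap is an assumption; adjust the set to your needs.
import re

LRE, PDF = "\u202a", "\u202c"

def wrap_punctuation(text, punct=",.;:!?\"'"):
    pattern = "([" + re.escape(punct) + "]+)"
    return re.sub(pattern, LRE + r"\1" + PDF, text)
```

Run it over the text column only (not the time codes) and check every line in the Aegisub preview afterwards.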
Special consideration – interlaced video material and subtitles
One last problem: interlaced material. In the case of interlaced video material, the y0/y1 coordinates have to be even, not odd. This is not documented very well, and it took me a long time to figure it out. Someone pointed out the spumux tutorial where the problem is described. However, avs2bdnxml does not recognize this problem. It’s a bug. You will see this problem only while using hardware dvd players; I never encountered it on PC, Mac, etc. Then your subtitles look messed up again:
To prevent this, you have to analyze the .xml metafile created by avs2bdnxml for any odd y coordinates (which distinct values are present?) and replace them with
y_new = y_old + 1
You can replace them within the .xml file using search & replace if you know what you’re searching for. A good choice is therefore to convert the .xml to .csv or .tab and do a short descriptive statistical analysis (i.e. tables). Then you immediately see the distinct y coordinate values. There will be only a few of these subtitle types, and a manual replacement is faster than coding a script to work directly within the .xml (unless you are used to doing that). Again, there is no need to re-render the subtitles; that would not have any effect on the problem. It seems spumux and dvdauthor are aware of the problem, but I had problems using the mpeg2 video stream together with spumux (in the case of importing pre-rendered subtitles), and using spumux alone does not produce results comparable to the ones from avs2bdnxml. The quality of avs2bdnxml is superb. Maybe, if someone has time, you could code your own GUI based on ImageMagick, which can convert text to images and is very powerful.
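If you do want to script the replacement, a minimal sketch looks like the following. It assumes the y coordinates live in a Y attribute of Graphic elements in the avs2bdnxml output; verify those names against your own .xml first.

```python
# Minimal sketch: bump every odd Y coordinate to the next even value
# (y_new = y_old + 1), as required for interlaced material.
# The element/attribute names (Graphic, Y) are assumptions.
import xml.etree.ElementTree as ET

def fix_odd_y(root):
    """Mutate the tree in place; return how many values were changed."""
    changed = 0
    for g in root.iter("Graphic"):
        y = int(g.get("Y"))
        if y % 2:                # odd coordinate: make it even
            g.set("Y", str(y + 1))
            changed += 1
    return changed
```

Load the file with tree = ET.parse("target-xml-file.xml"), call fix_odd_y(tree.getroot()), and save the result with tree.write(...).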
Unfortunately there is no software available yet that prevents all the problems mentioned here. However, the method described above produces nice-looking subtitles for dvd authoring. They really look good, even on an HD TV screen. Don’t forget to tweak the colors with DVDSubEdit. The usage of ghostboxes (see Aegisub for how to do that) looks better than fonts with outlines. Both solutions make use of antialiasing. In the case of ghostboxes it is advisable to use a transparency of 12 (see DVDSubEdit) instead of 15. Then you can see the background (i.e. the movie) through the subtitles, but not too much. This is very pleasant for the eyes and supports readability.
That’s all. Happy subtitling and rendering.
The supposedly widespread ability to precognize is surely something worth exploring scientifically. If it exists, the researcher who publishes a positive outcome will achieve eternal fame. Now, the JPSP will publish a study that tries to validate humans’ capability to precognize. A preprint of Bem’s experimental study is available here. 9 experiments were performed with about 1000 participants – quite a huge undertaking! And in 8 of the 9 it seems that precognition exists. But before you conclude that you too can precognize and head to the next casino to lose your money, let us review how this research was realized. Two articles criticize the original study: one on methodological grounds (Wagenmakers et al.) and the other by doing a replication (Galak & Nelson). The replication study failed, and Wagenmakers et al. come to the conclusion that the problems of Bem’s study are not related to whether precognition is possible or not, but to the fact that the underlying method was not properly realized. Read for yourself. I think it is also worth mentioning that the methodological review uses Bayesian methodology.
There is a new interview with S.N. Goenka on Vipassana and its benefits available at the Indian Express, titled ‘You have to work out your own salvation.’ Although all interviews with Goenkaji are very inspiring, I like this one because it is reduced to the core essence of what is important in Vipassana. Not only is he very clear about the fact that he does not teach Buddhism; there is also no place for any guru, and no other rites or rituals are involved. It is just the pure science of mind and matter by observation ‘within.’ Thus, one confronts reality as it is. Enjoy it – and practice!
Using the free versions of Google Sketchup and SU Podium, you can render graphics quite easily:
First, if you are not familiar with R, you need to install R from CRAN and (under Win32) Tinn-R. Tinn-R enables you to control R via script. If you use Linux, just install R via your distribution of choice; there is also an add-on for Emacs to control R (ESS). The next step is to start R via Tinn-R or Emacs. Don’t forget to choose your ‘hotkeys‘ in Tinn-R (or any other editor of your choice to control R).
For multiple imputation, I chose the R package ‘mice‘ by van Buuren et al. You have to install it manually. There are also other packages that deal with it; see:
If you are not familiar with the R-style to formulate linear models, start with
or read the usual intros and manuals that are linked via CRAN (or the contributed documentation). Otherwise, search the internet with your search engine of choice, adding ‘cran’, e.g. ‘multiple imputation cran’ or ‘missing data cran’, etc.
The following are excerpts from the man pages of the ‘mice’ package commands.
?read.table        # import of data - see tutorials and intros to R
                   # on 'how to import data'
library(help=mice) # what is inside package 'mice'?
library(mice)      # load library for MI
data(nhanes)       # use data from 'mice'-package
str(nhanes)
nhanes             # show data
?mice              # produce Multivariate Imputation by Chained Equations
imp <- mice(nhanes)
imp
str(imp)
?lm.mids           # performs repeated linear regression
                   # on a multiply imputed data set
lm.mids            # R source code of 'lm.mids'
fit <- lm.mids(bmi~hyp+chl, data=imp)
fit
summary(fit)
str(fit)
?pool
pool(fit)          # pool results
summary(pool(fit)) # better output
pool               # R source code of 'pool'
That’s all. Multiple imputation (ordinary ANOVA) is quite easy to perform in R.
Slashdot reported on Sunday, Sept 20, the outcome of a new study on the appropriateness of fMRI research, which can be read in much more detail on Wired. This study can be seen as a serious warning against ‘blindly’ believing the results of fMRI research without proper quality management. Craig M. Bennett and colleagues realized an experiment that is quite close to a gag from ‘Monty Python’s Flying Circus.’ However, the experiment is serious. They scanned a mature, but dead, Atlantic salmon. The experimental task is worth quoting in its original language:
“The salmon was shown a series of photographs depicting human individuals in social situations with a specified emotional valence. The salmon was asked to determine what emotion the individual in the photo must have been experiencing.”
Several photos of human beings were shown to the salmon, and its reactions were measured. Reviewing the results – surprise, surprise – they showed clear activity in the dead salmon’s brain:
“Several active voxels were discovered in a cluster located within the salmon’s brain cavity.”
The dead salmon’s ‘emotional reactions’ look quite impressive, as can be seen in the figure right next to this text. It seems the (dead) salmon reacted to photos of human beings. Unfortunately, the study was turned down by several publications. However, a poster is available. Thinking about the mass of research in the field of fMRI, it is a little bit confusing what we can believe and what not. What is needed is good control of random but significant voxels. Additionally, we should not take brain research too seriously. But false positives should be taken very seriously. Finally, I also want to point to an older posting on this blog about the work of Ed Vul and his colleagues at MIT. Further info dedicated to fMRI and the dead salmon is available on Craig Bennett’s personal blog.
Bennett CM, Baird AA, Miller MB, and Wolford GL. (submitted) Neural Correlates of Interspecies Perspective Taking in the Post-Mortem Atlantic Salmon: An Argument For Multiple Comparisons Correction.
The Burchardi church in Halberstadt hosts the longest piece of music. It lasts for an incredible 639 years and started in 2001. The piece is called „Organ2 / ASLSP“ and was written by John Cage. The longest note lasts 58 years, the shortest probably several months. There is a short report available on spiegel-online.