PhD defense – Astronomy and Astrophysics

It's wonderful to see so many faces, and I'm very happy to be here. Before I begin I want to thank a lot of people: first CONACyT Mexico, which funded my research for the last four and a half years; my home institution INAOE, which took care of me in more ways than I can count; my supervisors, collaborators, mentors, and friends; my family, who are joining on Skype from India; and everybody else present here, thank you for coming.

The title of my talk is "Near-infrared polarimetry of the interstellar medium". My thesis was supervised by Dr. Abraham Luna, and my main collaborators during the last four years were Dr. Divakara Mayya and Professor Luis Carrasco of INAOE.

Let me begin with an overview of what the thesis contains. First I will explain the why, the how, and the objective of the thesis, with an introduction to the interstellar medium, star formation, and polarization, and to the facilities we use for this study. Then I will describe the methodologies used in this work, including the software pipelines and the calibrations of the instrument. Finally I will present our results: the performance of the instrument as well as some science observations.

So let's begin the introduction. The interstellar medium (ISM) is filled with matter: dust, molecules, gas, cosmic rays, and magnetic fields. The most important process within it is star formation, where giant clouds condense into clumps, then cores, and eventually form stars. It is a very dynamic environment. To understand our observable universe, the star formation process is a key topic, and there are still open questions in our understanding of it. One key problem stands out: what role do magnetic fields play in the star formation process?
Star formation theory offers models with strong magnetic fields, models with weak magnetic fields, and models where turbulence and magnetic fields act together. In the strong-field models, gravitational collapse is inhibited by the magnetic field; in the weak-field models gravity dominates and stars form; but what we actually observe lies somewhere in between, with significant turbulence as well as magnetic fields. Theory also predicts how the field direction should appear in observations: the magnetic field is either parallel to the long axis of the molecular cloud or perpendicular to it. Perhaps the only way to resolve these questions is to study magnetic fields observationally.

So how do we observe magnetic fields? Of the various physical phenomena in the ISM, the four common methods by which magnetic fields have been traced through polarization are Faraday rotation, synchrotron emission, the Zeeman effect, and dust polarimetry. In this talk I will concentrate on dust polarimetry, where dust grains align with the local magnetic field in the interstellar medium and produce polarization through dust emission and dust absorption. In the figure you see two example magnetic field maps: one from dust emission measured by Planck, and one from near-infrared polarimetry with the Mimir instrument.

How does this work? In the graphical picture, light from a background star passes through the interstellar medium, and the dust that is aligned with the local magnetic field absorbs part of the radiation; at far-infrared and sub-millimeter wavelengths the grains emit radiation polarized perpendicular to the magnetic field direction. These magnetic fields are weak, around 5 microgauss.
In the near-infrared and optical, on the other hand, the same background starlight suffers dichroic extinction by these aligned grains, and the polarization we observe is parallel to the magnetic field direction. These polarizations are very weak, typically 1 to 3 percent, so it must be understood that we are looking for very weak polarization caused by weak magnetic fields in the interstellar medium.

To see what scales can be probed with these polarization observations, note that we are looking at star-forming regions from cloud sizes all the way down to disks. Translated to our observations, the clouds subtend around 20 arcminutes, whereas the disks are far smaller. So we need to define what observations we can make with sub-millimeter versus near-infrared polarimetry. In terms of the physical processes, the dichroic absorption probed in the near-infrared peaks between roughly 0.5 and 5 microns, whereas dust emission starts to rise beyond that. Comparing the advantages of the two: the near-infrared can probe the diffuse regions of the interstellar medium, whereas the sub-millimeter can probe the dense regions of cores and star-forming disks. At the same time, near-infrared polarimetry is a well-established technique, whereas sub-millimeter polarimetry is a newer technology that is expensive and highly competitive. In this thesis we therefore concentrate on dust-absorption near-infrared polarimetry, used to probe the diffuse regions of the ISM.
Coming back to the same sketch of star formation, of all these different regions we will be probing only clouds and filaments, at angular sizes from a few arcminutes up to more than 10 arcminutes. Having defined the regions to probe, we need observing goals for background-starlight polarimetry: 1) what area can we recover, that is, what sizes will we see; 2) which near-infrared band is best for observing these regions; 3) what sensitivity the instrument must reach; 4) what sampling we need so that the stellar density is high enough to trace the magnetic field directions; and 5) to what signal-to-noise ratio. After defining all this, the objective of my thesis is to meet the instrumental and scientific requirements to observe magnetic field directions using background-starlight polarimetry.

I fulfill this objective by asking key logical questions, which create the path for the scientific results: 1) what instrument is required to realize these studies; 2) what characteristics and on-sky performance should the instrument yield; 3) what software tools and calibration methods are required to transform the data into science-ready form; 4) how does the instrument perform in comparison to archival data; and finally 5) what new regions can be studied with the instrument. The thesis answers each of these questions in turn, as chapters, in order to fulfill the objective discussed above.

Let's start with the first question: what instrument? I begin by introducing CANICA, the Cananea Near-Infrared Camera at the 2.1-meter telescope in Sonora, Mexico, operated by INAOE. The contents of this chapter have been published in Carrasco et al.
CANICA is built around a mechanical cryostat cooled with liquid nitrogen. In the optical layout there is a collimating lens, then two filter wheels, and a focusing system that converts the f/12 beam from the telescope to f/6 on the detector. The main filters we will use are the primary near-infrared broadbands J, H, and K, though narrow-band filters are also available in the instrument, and this is how the instrument looks once mounted.

The camera alone is not enough for polarimetric studies, so let me also introduce the infrared polarimeter, POLICAN, which is attached to the camera, sitting between the camera and the telescope on a mechanical assembly. The instrument has a half-wave plate and a linear polarizer, connected to a rotating system driven by a stepper motor that controls the modulation of these optical elements. To understand how the polarimeter operates, we look at the way it is mounted to the telescope, and this is how it finally appears on the telescope. As for the elements themselves: a half-wave plate is an optical element that changes the phase of the incoming polarized light, whereas the polarizer passes only light of one polarization state. These are some of the laboratory parameters of the optics used in the instrument.

To see how these operate we go back to the physics of polarization. Any electromagnetic radiation traces an ellipse in the plane transverse to its propagation, described by the equation of an ellipse; but we cannot directly measure the amplitude and phase of the electromagnetic wave. Stokes therefore introduced four parameters that describe the intensity of the polarized light, including the linear Stokes parameters Q and U. In our observations we will be measuring these Stokes parameters, by working out what input polarization produces the output we record. In our setup the incoming polarized light passes through the half-wave plate, then through the analyzer, and then onto the detector. To determine the angles to which the half-wave plate must be rotated, we use Mueller calculus: every optical element changes the polarization state of the light passing through it, and solving the Mueller-matrix equations for our system shows that rotating the half-wave plate to the four angles 0, 22.5, 45, and 67.5 degrees recovers the polarization state, in particular the linear polarization Stokes Q and U. What is important to note is that in our observations, what we actually record is the flux of the stars at each of these half-wave-plate angles, and the essential method to obtain those fluxes is photometry of the stars in the image.

Now that the instrument has been introduced, let's go to the second question: what characteristics and performance should the instrument yield? I start with the characteristics of the CANICA detector and its performance, and a basic picture of how infrared detectors work: photons fall on the semiconductor material, the photoelectric effect generates charges, and these are amplified and converted into counts. CANICA uses a mercury cadmium telluride (HgCdTe) array detector of 1024 by 1024 pixels, which on the telescope gives a plate scale of 0.32 arcseconds per pixel and a field of view of 5.5 arcminutes on a side. Here you see an image of a flat, where the four quadrants of the detector are visible; each quadrant is read out separately. To see how images are formed after an observation, the figure shows that once the array is read, the signal passes through a preamplifier circuit, then various electronics, then an analog-to-digital converter, and is finally written as a FITS image. The readout methodology used in CANICA is the correlated double sampling (CDS) method: first we reset the detector, flushing out all the charges, because there is no shutter in the infrared; then we read, giving the bias image; then we integrate and read again, giving the raw image; and the difference of the two is the CDS image. While all this happens, certain detector parameters affect the quality of the image: the conversion gain, the readout noise, the dark current, and the linearity. The gain is the relation between the photoelectrons detected and the counts we measure.
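To make the half-wave-plate modulation concrete, here is a minimal sketch in Python with NumPy; the function names are my own illustration, not from the actual pipeline. It shows how the normalized Stokes q and u, and from them the degree of polarization and position angle, follow from the stellar fluxes measured at the four half-wave-plate angles.

```python
import numpy as np

def stokes_from_hwp(f0, f22, f45, f67):
    """Normalized Stokes q, u from fluxes at HWP angles 0, 22.5, 45, 67.5 deg.

    The HWP rotates the polarization plane by twice its own angle, so flux
    differences between the angle pairs modulate directly with Q and U.
    """
    q = (f0 - f45) / (f0 + f45)
    u = (f22 - f67) / (f22 + f67)
    return q, u

def linear_polarization(q, u):
    """Degree of polarization and position angle (degrees) from q, u."""
    p = np.hypot(q, u)
    theta = 0.5 * np.degrees(np.arctan2(u, q))
    return p, theta
```

For an unpolarized star all four fluxes are equal and q = u = 0; the modulation only appears for polarized light.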
The readout noise is the noise generated by the electronics, the dark current arises from thermally generated charge in the detector, and the linearity describes the detector's response. Before we talk about science we need to measure all these values and characterize the detector, so that we obtain accurate data.

The gain is calculated from the photon transfer curve, which relates signal to noise. We took flats from 1 to 60 seconds, and for a box region we measured the signal and its standard deviation; this was repeated across the detector array to build the plot of signal versus variance, and the gain is measured from the slope of this plot, as its inverse. To get a really accurate value, such measurements were made 1024 times across the detector; the distribution of all the gain measurements was fitted with a Gaussian to obtain the mean gain value.

Next is the dark current. Darks were obtained by closing the blank slide in the filter wheel and exposing for different exposure times, and the dark counts were measured in each image. The example plot here is for one pixel: you can see a steep initial rise of the dark counts followed by a linear increase; zooming in on the linear part, we fit a first-order polynomial to get the trend. This measurement was repeated for all the pixels in the 1k by 1k detector to get the mean dark current of the instrument.

We also have to correct for nonlinearity. This is an important step because, depending on the brightness of the stars, a nonlinear detector will not report the actual flux. The plot shows the response of the detector, with the measured counts on the y-axis: close to the saturation level the detector is around 10 percent nonlinear, whereas at low count levels the response is linear. To correct for this we observed flats of increasing exposure, obtained the distribution of counts with exposure time, and fitted the linear model, shown as the dashed line. With the linear model in hand we correct the nonlinearity through correction coefficients: the diamond curve shows the counts after linearity correction against the measured behavior, revealing the flux that was being missed because of the nonlinear response of the detector.

With the detector characterization done, we move to how the camera performs as an instrument, in terms of seeing and point spread function. The site has an average atmospheric seeing close to one arcsecond. In theory the PSF of a point source should not vary with its brightness, but we need to verify whether that actually holds for the instrument. What we did was observe various established open clusters in the nearby Galactic plane, in total 3042 sources over 13 nights; we measured the PSF of each source and plotted its FWHM against magnitude in each of the broadband filters. You can see that the dispersion of the FWHM values increases toward fainter magnitudes; this dispersion is not uniform, because at fainter magnitudes we are sometimes looking at extended sources such as galaxies rather than stars, and their PSF is not that of a star. Next, to see how the PSF varies across the field of view, each stellar FWHM measurement was placed into a contour map across the full CANICA field in J, H, and K. The contours show a radial increase of the PSF values from the center toward the edges of the detector; the changes are about 10 percent relative to the central 4 by 4 arcminute region shown in the box. These PSF variations are believed to be due to optical aberrations in the camera.
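The photon-transfer method described above can be sketched as follows; this is my own illustrative code on synthetic data, not the actual characterization scripts. For shot-noise-limited flats the variance grows linearly with signal, and the conversion gain in electrons per ADU is the inverse of the fitted slope.

```python
import numpy as np

def gain_from_photon_transfer(signal_adu, variance_adu2):
    """Fit variance vs. signal with a straight line.

    For shot-noise-limited data, var = signal/gain + const, so the
    conversion gain (e-/ADU) is the inverse of the fitted slope.
    """
    slope, _intercept = np.polyfit(signal_adu, variance_adu2, 1)
    return 1.0 / slope
```

As a check, synthetic flats generated for an assumed gain of 5.8 e-/ADU plus a read-noise floor recover that gain from the fit.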
From this study we conclude that the central 4 by 4 arcminute field of view is the best for point-source observations. Next we need photometric calibration of the same point sources: we measure the instrumental magnitude and obtain the zero point by comparing with data from the 2MASS all-sky survey. Here is an example of how the zero point is measured: each point in the plot is the difference between the 2MASS magnitude and the CANICA instrumental magnitude for one observation; obvious outliers are excluded manually, and we fit a horizontal line to get the zero point of the instrument for that observation. The same technique was applied to the earlier observations to obtain their zero points.

The next question is how the zero point varies from night to night: is it stable? In the plot, each point represents the average zero point of a night and the error bar its standard deviation. Over a period of about 30 nights, the standard deviation of the zero-point measurements is around 0.05 magnitudes, and from all the observations we obtained the mean zero point in each of the J, H, and K bands. We must note, however, that the zero points are affected by changes in the seeing conditions; therefore we correct the zero point of each star, measured in its observed field, against the average zero point of CANICA, which minimizes the effects caused by seeing and other atmospheric variations.

Next, how do the zero points vary with color and magnitude? Here is the plot of zero point against 2MASS color, for the same 3042 sources. Fitting a line to this distribution gives a negligible slope, indicating that there is no color term between the filters we are using and the 2MASS standard filters. In the plot on the right-hand side, the dispersion in zero point stays uniform as we go to fainter magnitudes, and for the brighter stars the dispersion is below a few hundredths of a magnitude, which essentially gives us the photometric accuracy of the instrument. We then checked how the zero point behaves across the field of view: unlike the earlier figure, where the PSF increased radially, the zero points remain consistent within the central 4 arcminute field of view. This indicates that the photometric quality has not been affected by the optical aberrations, and that photometry is best performed for sources well within the central 4 arcminute field.

Having measured the zero points, we need to establish the limiting magnitude we can observe with the instrument. At the top I show the theoretical limit estimated from an equation involving the zero point, the signal-to-noise ratio, and the other noise parameters. The plot shows the magnitude limit at an integration time of 900 seconds: theoretically we get 18.5 magnitudes in J, 17.6 in H, and 16.0 in K. To test the theory on the sky we observed a standard-star field with the same fixed integration time of 900 seconds and plotted the signal-to-noise ratio of all the stars in the field; the observed limiting magnitudes came out similar to the theoretical values. Finally, to summarize the detector characteristics and on-sky performance: we obtained the conversion gain, the dark current, and the readout noise, and we measured the zero points and the limiting magnitudes of the instrument. This answers our previous question of what characteristics and sky performance the instrument should yield.
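The zero-point determination above can be sketched like this; it is an illustration only, and where the talk describes manual outlier exclusion I substitute simple iterative sigma clipping.

```python
import numpy as np

def zero_point(m_ref, m_inst, nsigma=3.0, iters=5):
    """Zero point as the clipped mean of (reference - instrumental) magnitudes.

    Iterative sigma clipping stands in for the manual outlier rejection
    against the 2MASS reference catalog described in the talk.
    """
    d = np.asarray(m_ref, float) - np.asarray(m_inst, float)
    for _ in range(iters):
        keep = np.abs(d - d.mean()) < nsigma * d.std()
        if keep.all():
            break
        d = d[keep]
    return d.mean(), d.std()
```

With synthetic data (a true zero point of 20.5, 0.05 mag scatter, and a few 2-magnitude outliers standing in for extended sources), the clipped mean recovers the zero point while the outliers are rejected.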
With that done, we go to the next question: what software tools and calibration methods are required to transform the data we get into science-ready form? For this I introduce a chapter on the calibrations of the instrument, recently published in PASP. To begin we must understand the atmospheric effects in the near-infrared: OH emission lines dominate through the J and H bands, and the thermal background emission starts to dominate from K longward. We therefore need to observe in a way that facilitates removing these atmospheric effects.

The standard methodology in the near-infrared is to dither the images: unlike the optical, where the sky can be subtracted directly, in the near-infrared we must dither. For polarization observations we expose and dither at all four angles of the half-wave plate, repeating this per dither step; median-combining the dithered images then gives a sky image containing all these emissions. Typically the dither offset is around 30 arcseconds, not too large: big enough that stars do not overlap between positions, while the usable field of view is not compromised. POLICAN observing uses around 15 dither positions, which also lets us obtain a high signal-to-noise combined image.

Having defined the observing, we need to remove the other effects: pixel-to-pixel variation, the illumination profile, and bad pixels in the detector. For near-infrared polarimetry in particular we use dome flats taken with the lamps on and off; these are differenced and averaged to obtain a master flat, shown here in the figure, where you can see the large-scale illumination profile as well as the small pixel-to-pixel variation. To correct the data we divide by the normalized flat, which flattens these pixel-level structures. With this in place we can design the software pipelines that turn the raw frames into science-quality data. The main effects in each image are, from the detector, nonlinearity, dark current, and pixel-to-pixel variation; from the optics, non-uniform illumination profiles, thermal emission from the local surroundings, and dust grains on the optics; and from the atmosphere, OH emission, turbulence, and variable sky transmission. So we developed new software pipelines.
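The dither-and-median-combine idea can be sketched as follows; this is a toy illustration of the principle, not the actual pipeline code.

```python
import numpy as np

def make_sky(dithered_frames):
    """Median-combine dithered frames pixel by pixel.

    Stars land on different pixels in each dithered frame, so the
    per-pixel median rejects them and keeps only the sky emission.
    """
    return np.median(dithered_frames, axis=0)

def sky_subtract(frame, sky):
    """Remove the atmospheric (OH / thermal) background from one frame."""
    return frame - sky
```

In a toy demonstration with five 100-count frames, each containing one bright "star" at a different dither position, the median sky comes out perfectly flat at 100 counts and subtraction leaves only the star.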
These pipelines were adapted from earlier programs written by our collaborators. Before starting the data processing, I want to highlight that all the necessary calibration products have been established by now: the linearity correction was defined earlier in the chapter, we showed how to measure the dark current, we showed how the master flat is obtained for the pixel-to-pixel corrections, and for the atmospheric effects we showed that dithering gives us the sky image.

Going through the steps of the processing: once the images with the polarimeter are obtained, we group them by exposure time, dither position, and half-wave-plate angle. The first step is the linearity correction, applied to the CDS images; then we subtract the dark; after subtracting the dark we flat-field the image using the master flat. Once the images are flat-fielded we build the sky image, obtained by median-combining all the dithered images after flattening, and subtract the resulting sky from each frame. Finally, because the images have been dithered, they must be aligned: we measure the shifts between dither positions using a common star against a reference image, align them, and median-combine to obtain the final image for one polarimeter angle. These are then transformed and trimmed to the good field of view, and all the point sources in the image are selected.

Once this is performed, we come to the next important step: as I pointed out earlier, after image reduction we select the point sources, and essential to all polarimetry is precise photometry. We measure the flux of each source within a synthetic aperture, using a new pipeline we developed, which builds on tasks from DAOPHOT. To decide the optimum aperture, one that measures the flux of the star and not the noise or background contamination, we first calculate the signal-to-noise ratio of the source within an aperture of 10 pixels and then plot it against different aperture radii. Empirically we found that at low signal-to-noise ratio the best aperture for photometry is about 7 pixels, and as the signal-to-noise ratio increases we use larger apertures. This is done separately for each star during the image reduction process.

With the images reduced and the point sources selected, it is time for the full polarimetric analysis. The inputs are the fluxes of each star at the four angles of the half-wave plate; from these fluxes we obtain the Stokes parameters Q and U, correct for the instrumental effects, convert them to the equatorial frame, and finally measure the polarization percentage and the position angle of each star. The steps involved are to correct for the half-wave-plate zero-phase offset angle and for the instrumental polarization, which I will explain shortly, and then to debias P: because P is the quadrature combination of Stokes Q and U, polarization measurements are always positively biased. The final output of the pipeline, for each star, is its coordinates, its magnitude, its Stokes parameters, its polarization, its position angle, and their errors, and these are used to make the polarization map. In the figure, on the left is a raw image from the instrument, next the image after the first stage of reduction, and then the result of the polarimetric analysis. Each vector has a length set by the star's polarization; a reference vector of 3 percent is shown, and the position angles are measured from north toward the east.
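The debiasing step just mentioned, needed because the quadrature sum of Q and U makes P positive-definite, is commonly handled with the Ricean correction; here is a minimal sketch assuming that standard estimator.

```python
import numpy as np

def debias_polarization(p, sigma_p):
    """Ricean debiasing: p_true ~ sqrt(p^2 - sigma_p^2).

    Because p = sqrt(q^2 + u^2) can never be negative, noise inflates
    the raw value; measurements below their own noise debias to zero.
    """
    p = np.asarray(p, float)
    sigma_p = np.asarray(sigma_p, float)
    return np.sqrt(np.maximum(p**2 - sigma_p**2, 0.0))
```

For example, a 2 percent measurement with 0.5 percent uncertainty debiases to about 1.94 percent, while a measurement smaller than its own uncertainty goes to zero.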
With the data processing done, we come back to establishing the instrumental polarization. This is a key step, especially when we are measuring low polarization levels of 1 to 3 percent: the instrumental polarization must be removed carefully to obtain accurate polarizations. Globular clusters are regions that have been shown to have essentially zero polarization, so any polarization measured in an observation of a globular cluster comes from the instrument, because the stars themselves are unpolarized. Here is how the cluster sits on the CANICA field: its stars cover almost the entire usable field of view, and by spreading the observations over different pointings we can readily map the instrumental polarization across the field. These globular cluster observations were carried out over the last three years, 37 observations with a fixed exposure time, in order to build a large statistical dataset.

First, how do the values vary with time? Each point is the mean instrumental polarization calculated from all the stars of the globular cluster, and here are the Stokes Q and U of the 37 epochs centered on the grand mean. The standard deviation is around 0.2 percent; that is the time-dependent variation over the last three years. What matters most, though, is the mean instrumental polarization of POLICAN. For each star in the field we measured its Stokes Q and U and combined the measurements from all 37 observations, eventually reaching 10,700 stars with instrumental values; we placed them in histograms and fitted Gaussians to the distributions to obtain the mean instrumental Stokes values: the instrumental polarization in Q is about 0.5 percent and in U about 0.1 percent.

Having measured the mean values, we check whether the instrumental polarization changes with position. Here are contour maps of the instrumental Q, U, and P, built from the values of each star: the variation across the field of view is about 0.04 percent, which is minimal. This basically reflects the fact that the polarizing elements of POLICAN sit in front of the camera, so any instrumental polarization arising from the optics inside the camera is minimized; the only remaining contribution is from the telescope mirrors, and that is why we have such a low instrumental polarization.

Next we need the half-wave-plate offset angle of the instrument: when the polarimeter is mounted, the half-wave-plate zero is not aligned with equatorial north, so we must find the offset angle. To do this we observe polarized standard stars, measure their position angles, and take the difference between the published values and what we obtain; that difference gives the half-wave-plate offset. We observed the standard star HD 38563C over a period, measuring its polarization and corrected position angle; the published values and our measurements match, and the offset angle we finally obtain is about 113 degrees.

We have now finished writing the software pipelines and calibrating the instrument, so we go to the next question: how does the instrument perform in comparison to archival data? Recalling our science goal, to map large regions of clouds and filaments, we chose a region from an archival polarization survey that has high stellar density, high polarization levels, and uniform position angles; at its distance the region spans close to ten parsecs. The good usable field of view of POLICAN is 4 arcminutes, so covering this large region requires a mosaic of pointings.
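The two corrections just described, subtracting the mean instrumental Stokes values and rotating the position-angle zero point by the half-wave-plate offset, can be sketched as below. This is an illustration under the standard convention that a position-angle rotation of phi on the sky acts as 2*phi in Q-U space; the sign convention would need to match the actual instrument.

```python
import numpy as np

def correct_stokes(q, u, q_inst, u_inst, offset_deg):
    """Remove instrumental polarization, then rotate into the equatorial frame.

    A position-angle offset of phi degrees corresponds to a rotation by
    2*phi in the Q-U plane, since theta = 0.5 * atan2(u, q).
    """
    q0, u0 = q - q_inst, u - u_inst
    a = np.radians(2.0 * offset_deg)
    qc = q0 * np.cos(a) - u0 * np.sin(a)
    uc = q0 * np.sin(a) + u0 * np.cos(a)
    return qc, uc
```

A quick sanity check: with a zero offset only the instrumental subtraction acts, and a 45-degree offset maps pure q into pure u, i.e. a position angle of 0 becomes 45 degrees.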
So to map this large region we had to make multiple pointings, where this black box shows the field of view of the chip itself and the smaller box our useful field of view. Basically, in order to cover the entire region we made 15 pointings with our instrument, keeping the exposure time fixed at 20 seconds, so that the total integration time for this entire set of observations took 7.5 hours.

Once we finished the mapping, we wanted to see what we detect from these observations, so here are the statistics of what we obtain from our instrument. In total we have a detection of 13,000 stars in the field, and here is the cumulative distribution of the stars we detect, where our instrument POLICAN is plotted in blue and 2MASS in green. You can see that in this distribution POLICAN reaches depths beyond the survey, and the number of stars detected is higher. Of the stars in the field, 9,000 showed polarization detections, we had around 50% 2MASS matches, and the stellar density was excellent, 30 to 40 stars per unit area. Each star's polarization gives us a magnetic field direction, and 50% of the stars are at magnitudes fainter than 14, which again shows that we can probe extinction deeper.

Once we know the stellar detections, we look at the photometric properties: here on the left-hand y-axis you see the signal-to-noise of each source, and then the magnitude error. The signal-to-noise ratio we obtain reaches down to faint magnitudes of 15.5, and the error in magnitude we get is around 1% up to that magnitude. These photometric values were then also compared to 2MASS photometry to see how accurate our photometry is: we plot the difference between our magnitudes and the 2MASS magnitudes, where the error bars indicate the magnitude errors in 2MASS.
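The cross-check against 2MASS amounts to computing the per-star magnitude difference and its scatter; a minimal sketch, where the matched magnitudes are invented for illustration:

```python
import numpy as np

def photometric_dispersion(mag_instr, mag_2mass):
    """Mean offset and dispersion of (instrument - 2MASS) magnitudes for
    matched stars; a small, roughly zero-centred scatter indicates accurate
    photometry."""
    diff = np.asarray(mag_instr) - np.asarray(mag_2mass)
    return diff.mean(), diff.std()

# Illustrative matched magnitudes (not real data from this thesis):
m_pol = np.array([11.02, 12.51, 13.48, 14.03])
m_2m  = np.array([11.00, 12.50, 13.50, 14.00])
offset, scatter = photometric_dispersion(m_pol, m_2m)
```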
The dispersion, or photometric accuracy, is better than 1% for these stars. Once we know the photometric properties, we look at the polarimetric properties, which is basically the uncertainty of our polarization measurements: how good, or how reliable, is the polarization of each star in each band. Plotting the polarization against the polarization uncertainty, we see that stars up to 13th magnitude have an uncertainty of only up to 1%, which shows that our accuracies are below 1%. In order to choose stars that are reliable, we need to classify them based on the uncertainty of polarization, so we created usage flags for the data. Usage flag 0 is the best in terms of quality, with a polarization uncertainty of less than 1%, magnitude brighter than 13, and a good polarization signal-to-noise; these give an accurate tracing of the plane-of-sky magnetic field direction. Similarly, usage flag 1 has an uncertainty of less than 1% at magnitude 13, and then we go to the fainter stars, where the uncertainty is larger and which give only a coarse direction of the magnetic field.

Once we have established the stellar photometric and polarimetric properties, we need to see how our observations look, so we go back to the map and plot our polarization values in comparison to the GPIPS survey. What you see here is POLICAN plotted in blue and GPIPS shown in red, and visually most of the polarization vectors align in both magnitude and direction. In order to see this statistically, we then take the difference between the POLICAN and GPIPS polarization values, and likewise the position angle values, against magnitude; this is done for a total of 1,300 stars in order to get an accurate comparison. You can see that the dispersion increases toward fainter magnitudes but stays accurate for brighter magnitudes. We then plotted the histogram distribution of both these differences and fit a Gaussian, basically to get the standard deviation, which gives the accuracy of our instrument: for the polarization differences, the standard deviation is 0.4%.
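The usage-flag classification described above can be sketched as follows; the thresholds are taken from the text where they are stated, while the signal-to-noise cut of 3 for the best class is an assumption:

```python
def usage_flag(mag, p, sigma_p):
    """Classify a star's polarization reliability.
    Thresholds follow the text (sigma_P < 1%, mag < 13); the SNR >= 3
    requirement for the best class is an assumption."""
    if sigma_p < 0.01 and mag < 13.0 and p / sigma_p >= 3.0:
        return 0   # best quality: accurate B-field tracing
    if sigma_p < 0.01 and mag < 13.0:
        return 1   # reliable, but lower polarization signal-to-noise
    return 2       # fainter stars: coarse field direction only

# Illustrative (mag, P, sigma_P) triples:
flags = [usage_flag(m, p, s) for m, p, s in
         [(11.5, 0.030, 0.005), (12.8, 0.010, 0.008), (14.9, 0.020, 0.015)]]
```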
And the standard deviation of the position angle differences that we get is less than 4 degrees. This basically sets the limits of our instrument, namely for stars of usage flag 1 and better. Finally, if I summarize the instrument, what we get are the important numbers: the photometric accuracy of around 1% for stars brighter than 13, the polarimetric accuracy, the position angle accuracy and the instrumental polarization. All of these answer our previous question of how the instrument performs compared to archival data.

From what we obtain here, we want to relate back to our original science goals and ask whether they can be met with these values. If I compare with my results: what areas do we need to cover, from a few parsecs to sub-parsec scales, and here we covered a large area; how much extinction do we probe, and we went up to depths of 18 magnitudes; what accuracy do we need in order to measure polarization values of 2 to 3 percent, and we have an accuracy better than 0.5%; similarly, the stellar density obtained in this field is more than what we required. All of this basically confirms that the instrument is ready for science observations.

With these goals being fulfilled, we go to the last question: what regions can be studied with this instrument? For this I introduce a molecular cloud that was earlier published by our group in conference proceedings. Here is the region of the molecular cloud: basically it was identified from an IRAS source, which is located here, and this is a 2MASS image where the dark cloud is seen as the dark patch devoid of stars. The 13CO molecular data from the GRS survey were then used to define the cloud, and this is the morphology of the cloud we obtain from the molecular data.
Once we know the size and physical morphology of the cloud itself, we can target to map it with our instrument. The cloud has a size of around 24 arcminutes, so given our field of view this is again a large region, and we had to make multiple pointings with our instrument to map the entire cloud. This meant 36 pointings, of which eventually 23 pointings covering all the boundaries of the cloud were observed successfully, with an exposure time of 30 seconds, leading to 2,000 or so images. After analyzing the data with our own pipeline, we obtain 6,797 stars in this field, with 2,000 stars having 2MASS matches.

Before moving to the magnetic field direction: the polarization detections from this region can come either from stars behind the cloud or in front of the cloud, so in order to separate the foreground and background stars we use the standard method of the color-magnitude diagram, finding the color of each star using the 2MASS data. The stars behind the cloud are more reddened and have higher color indices, so we set the criterion of J − H greater than 1 to say that these stars are behind the cloud, while the rest are foreground stars. Only polarization values from the background stars are useful to map the magnetic field directions.

Hence we now come back to this large map of the cloud, on which I show the plot of the polarization, that is, the magnetic field directions. What you see here is that I have plotted values only for usage flag 1 and better, and each vector directly represents the plane-of-sky magnetic field direction, one component of the B field; at the bottom is the mean Galactic field direction obtained from Planck observations. You see that the magnetic field direction in this cloud is oriented somewhere around 30 degrees with respect to the mean field, and it also differs from the direction of the cloud's long axis, which relates back to the theory I showed in the second slide, where the field lines are perpendicular to the cloud's long axis.
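The foreground/background separation just described reduces to a simple 2MASS colour cut; a minimal sketch, with the (J, H) magnitudes invented for illustration:

```python
# Background stars redden as their light crosses the cloud, so a 2MASS
# colour cut separates them; the J - H > 1 threshold is the one in the text.
def is_background(j_mag, h_mag, colour_cut=1.0):
    """True for stars reddened enough to lie behind the cloud."""
    return (j_mag - h_mag) > colour_cut

stars = [(13.2, 11.9), (12.1, 11.8), (14.5, 13.1)]   # illustrative (J, H) pairs
background = [is_background(j, h) for j, h in stars]
```

Only the stars flagged as background would then contribute polarization vectors to the magnetic field map.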
What this indicates, or what we can interpret from these preliminary observations, is that the cloud can gravitationally collapse along the field lines, basically forming a filament-like structure, and the denser regions here, which are the clumps or cores where stars can be formed, would appear at these edges. Eventually we see that the B field has a significant role here in how matter flows along the field direction. This is still a preliminary observation; these are the results we obtained with the instrument, and there is more analysis that we can do.

With this I end the description of the work presented in the thesis and come to the conclusions and future work. The concluding remarks are that we have successfully developed software for, calibrated and characterized the CANICA camera as well as the polarimeter POLICAN. The first step was to characterize the detector, obtaining the various key parameters that are useful for observation planning. Then we obtained the camera's performance on sky, the zero points and limiting magnitudes, as well as the variations over the field of view, so as to know where our target stars should be positioned. We then also defined various operational parameters for the polarimeter and observing strategies, such as how many images should be obtained to reach a high signal-to-noise ratio. We developed a robust data-reduction and polarimetric-analysis pipeline in order to convert the raw data into science-quality form, and we obtained the instrumental polarization as well as the half-wave plate offset angle. We then performed sample observations to compare the results of our instrument and to check that the accuracies we achieve meet the goals for magnetic field studies, and finally we made very preliminary observations of a molecular cloud to reveal the large-scale magnetic field structure in this region.

With this, what else can be done with this instrument and with the data we have? Once we obtain the magnetic field morphology, it is important to obtain the magnetic field strength. The field strength can be obtained from a combination of the polarization observations that we have with spectral-line data that give us the local gas velocity dispersion and density. If I give you an analogy for this equation: if you have a guitar, each string represents a magnetic field line, and the tension of the string is the strength of the magnetic field. If the tension is high, the vibration is small, which translates into a small dispersion of the polarization angles; if the field is weak, the vibration of the string is larger, so the polarization-angle dispersion is larger. From this we can calculate the plane-of-sky magnetic field strength, which can then be used to calculate the mass-to-flux ratio, telling us whether the region is gravitationally bound to collapse or whether the magnetic field is preventing the collapse of the region. Once we know this, we will have a better understanding of whether and how clouds form stars. Additionally, with POLICAN we are only mapping the large-scale magnetic field structure, but we can combine it with other instruments, such as the airborne observatory SOFIA, to give a complete multi-scale view of the magnetic field, from large clouds to dense cores and eventually to disks, which gives a full picture of the star formation process and its relation to magnetic fields, and a better understanding of our observable universe. So with that I end my talk; thanks for your patience, and here is a map of the magnetic field obtained by Planck.
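As a closing technical note: the field-strength estimate described above is the classical Chandrasekhar–Fermi method. The talk does not give the equation explicitly, so this sketch uses Crutcher's widely quoted practical form of the relation and of the mass-to-flux criticality; all input numbers are invented for illustration:

```python
import numpy as np

def b_pos_cf(n_h2, dv_fwhm, sigma_theta_deg):
    """Plane-of-sky field strength from the Chandrasekhar-Fermi method in the
    practical form  B_pos ~ 9.3 * sqrt(n(H2)) * dV / sigma_theta  [microgauss],
    with n(H2) in cm^-3, dV the line FWHM in km/s, and the polarization-angle
    dispersion in degrees."""
    return 9.3 * np.sqrt(n_h2) * dv_fwhm / sigma_theta_deg

def mass_to_flux_ratio(n_h2_column, b_ug):
    """Observed-to-critical mass-to-flux ratio, lambda ~ 7.6e-21 * N(H2) / B,
    with N(H2) in cm^-2 and B in microgauss. lambda > 1 is magnetically
    supercritical: the field alone cannot prevent gravitational collapse."""
    return 7.6e-21 * n_h2_column / b_ug

# Illustrative numbers (not measurements from this thesis):
B = b_pos_cf(n_h2=1e3, dv_fwhm=2.0, sigma_theta_deg=10.0)   # microgauss
lam = mass_to_flux_ratio(n_h2_column=1e22, b_ug=B)
```

Note how a small angle dispersion (the taut guitar string in the analogy) yields a large inferred field strength, exactly the intuition given in the talk.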